An Effective End-User Development Approach Through Domain-Specific Mashups for Research Impact Evaluation

Notice: This research summary and analysis were automatically generated using AI technology. For authoritative details, please refer to the original arXiv source.

Over the last decade, there has been growing interest in assessing the performance of researchers, research groups, universities, and even countries. Productivity assessment is an instrument for selecting and promoting personnel, assigning research grants, and measuring the results of research projects. One particular assessment approach is bibliometrics, i.e., the quantitative analysis of scientific publications through citation and content analysis. However, there is little consensus today on how research evaluation should be performed, and it is commonly acknowledged that the quantitative metrics available today are largely unsatisfactory. A number of scientific data sources available on the Web (e.g., DBLP, Google Scholar) are used for such analyses. Taking data from these diverse sources, performing the analysis, and visualizing the results in different ways is neither trivial nor straightforward. Moreover, the people involved in such evaluation processes are not always IT experts and hence are not able to crawl data sources, merge them, and compute the needed evaluation procedures. The recent emergence of mashup tools has refueled research on end-user development, i.e., on enabling end users without programming skills to produce their own applications. We believe that the heart of the problem is that it is impractical to design tools that are generic enough to cover a wide range of application domains, powerful enough to enable the specification of non-trivial logic, and simple enough to be actually accessible to non-programmers. This thesis presents a novel approach to effective end-user development, specifically for non-programmers: we introduce a domain-specific approach to mashups that "speaks the language of the user", i.e., that is aware of the terminology, concepts, rules, and conventions (the domain) the user is comfortable with.


💡 Research Summary

The dissertation addresses the growing need for systematic evaluation of research impact at the level of individual scholars, research groups, institutions, and nations. While bibliometrics—quantitative analysis of publications through citations and content—has become a standard approach, existing metrics and tools suffer from several shortcomings: they are often subjective, lack universal acceptance, and require substantial technical expertise to gather, clean, and merge data from heterogeneous sources such as DBLP, Microsoft Academic, Google Scholar, and Web of Science. Moreover, issues like name disambiguation further complicate the process, making it inaccessible to many stakeholders who are not IT specialists.

In response, the author proposes an end‑user development (EUD) solution based on domain‑specific mashups that “speak the language of users.” The core idea is to sacrifice genericity in favor of simplicity and domain awareness, thereby enabling non‑programmers to construct powerful data‑intensive applications without writing code. The solution is built on a two‑level meta‑model architecture. The first level, the generic mashup meta‑model, defines universal concepts such as components, data‑passing styles, orchestration patterns, and execution semantics. Components are described declaratively using a Component Definition Language (CDL), which captures their inputs, outputs, and configuration parameters. The second level, the domain meta‑model, captures the specific entities, relationships, and evaluation rules relevant to research impact assessment (e.g., researchers, publications, citations, H‑index, G‑index). Domain rules are expressed as logical constraints that can be automatically validated during mashup execution.
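The H-index and G-index named in the domain meta-model are simple rank-based statistics over a researcher's citation counts. As a reference point, the following sketch implements their standard definitions (this is not code from the thesis):

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """Largest g such that the top g papers together have >= g^2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

# Example: five papers with citation counts 10, 8, 5, 4, 3
print(h_index([10, 8, 5, 4, 3]))  # 4 (four papers with >= 4 citations each)
print(g_index([10, 8, 5, 4, 3]))  # 5 (top 5 papers have 30 >= 25 citations)
```

In the thesis's approach, metrics like these would be exposed to end users as named domain components rather than as code.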

By merging these two meta‑models, the author creates a domain‑specific mashup meta‑model that allows users to assemble workflows using familiar terminology. A graphical composition editor lets users drag‑and‑drop components, configure them through simple forms, and automatically generates the underlying CDL and Mashup Definition Language (MDL) specifications. The mashup engine, implemented as a client‑server system, executes the composed workflows on the server side—handling large data transfers and intensive computations—while providing real‑time visual feedback on the client side. A notable feature is the “intelligent switching” mechanism that dynamically toggles between data‑flow and control‑flow execution modes, giving users fine‑grained insight into the processing pipeline.
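The thesis's engine, CDL, and MDL are not reproduced here; as a rough illustration of the data-flow execution style described above, in which each component consumes the previous component's output, consider this minimal sketch (all component names and data are hypothetical):

```python
def fetch_publications(author):
    # Stand-in for a source component (e.g., a DBLP connector);
    # returns fixed citation counts instead of calling a real service.
    return [10, 8, 5, 4, 3]

def compute_h_index(citations):
    # Metric component: largest h with h papers cited >= h times each.
    ranked = sorted(citations, reverse=True)
    return max((rank for rank, c in enumerate(ranked, 1) if c >= rank), default=0)

def render_report(h):
    # Sink component: formats the result for display.
    return f"h-index = {h}"

def run_pipeline(seed, components):
    # Data-flow execution: pipe each component's output into the next.
    value = seed
    for component in components:
        value = component(value)
    return value

print(run_pipeline("A. Author", [fetch_publications, compute_h_index, render_report]))
```

In the actual system this wiring would be produced by the graphical editor as an MDL specification and executed server-side, not written by the end user.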

The prototype system, named ResEval Mash, is evaluated through two user studies. Study 1 focuses on comparative usability: participants (both technical and non‑technical) performed a set of predefined tasks using ResEval Mash and three existing tools. Results show a 45% reduction in task completion time and a significant drop in error rates for ResEval Mash. Study 2 assesses long‑term usability and perceived usefulness. Participants report that the domain‑specific terminology and hidden data‑mapping details lower the learning curve, and they successfully applied the tool to realistic scenarios such as evaluating a university department, selecting candidates for an Italian professorship, and computing H‑ and G‑indices for a research group.

The dissertation contributes several novel artifacts: (1) a formal definition of a domain‑specific mashup meta‑model, (2) the CDL and MDL languages for component and workflow specification, (3) a modular mashup engine architecture supporting data‑intensive processes, (4) the ResEval Mash tool with an intuitive graphical interface, and (5) empirical evidence of improved usability for non‑programmers. The author also outlines future work, including persistent caching to reduce redundant data retrieval, automated registration and deployment of third‑party services, component‑mapper generation for external components, and recommendation support to aid users in constructing effective mashups.

Overall, this work demonstrates that by grounding mashup technology in the specific semantics of the research evaluation domain, it is possible to empower end users—who lack programming skills—to build, execute, and interpret sophisticated impact‑assessment workflows, thereby bridging the gap between data‑rich bibliometric resources and practical decision‑making in academia and policy.

