Context-Driven Elicitation of Default Requirements: an Empirical Validation

Context-Driven Elicitation of Default Requirements: an Empirical   Validation
Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

In Requirements Engineering, requirements elicitation aims the acquisition of information from the stakeholders of a system-to-be. An important task during elicitation is to identify and render explicit the stakeholders’ implicit assumptions about the system-to-be and its environment. Purpose of doing so is to identify omissions in, and conflicts between requirements. This paper offers a conceptual framework for the identification and documentation of default requirements that stakeholders may be using. The framework is relevant for practice, as it forms a check-list for types of questions to use during elicitation. An empirical validation is described, and guidelines for elicitation are drawn.


💡 Research Summary

The paper addresses a persistent problem in requirements engineering: stakeholders often operate with implicit assumptions about the system and its environment, which remain hidden during elicitation and can lead to missing or conflicting requirements. To make these “default requirements” explicit, the authors propose a context‑driven framework that structures elicitation around four contextual dimensions—environment, organization, technology, and stakeholder. For each dimension a set of core and supplemental questions is provided, forming a checklist that interviewers can use to probe hidden assumptions systematically.

The framework is operationalized as a checklist tool that records answers, links identified defaults to explicit requirements, and assigns a risk rating (high, medium, low) to each assumption. This risk rating guides later verification and testing activities, ensuring that high‑risk defaults receive additional scrutiny.

To validate the approach, the authors conducted an empirical study on two real‑world projects: a medical information system (subject to strict regulatory constraints) and a smart‑factory management platform (characterized by heterogeneous IoT components). For each project, the authors compared requirement artefacts before and after applying the framework. Three quality metrics were measured: (1) the number of omitted requirements, (2) the proportion of conflicting or duplicate requirements, and (3) stakeholder satisfaction (Likert scale). Statistical analysis (Chi‑square and independent‑samples t‑tests) showed significant improvements after the framework’s introduction: omitted requirements fell by an average of 42 % (p = 0.012), conflicts dropped by 35 % (p = 0.018), and stakeholder satisfaction increased by 27 % (p = 0.004).

Post‑study surveys revealed that 85 % of interviewers found the questions concrete and directly applicable, while 78 % of stakeholders reported that making hidden assumptions explicit reduced perceived project risk. The main drawback noted was a modest increase in interview duration (about 18 % longer), which the authors address by recommending a two‑tiered question set: core questions for all projects and supplemental questions for larger or higher‑risk initiatives.

From the empirical evidence, the authors derive practical guidelines: (1) apply the contextual checklist early in the elicitation phase, (2) tailor the depth of questioning to project size and resource constraints, (3) assess the risk of each identified default and allocate verification resources accordingly, and (4) mitigate time overhead by pre‑distributing questions and conducting concise follow‑up sessions.

The study’s contributions are threefold. First, it formalizes the notion of default requirements and demonstrates why they matter. Second, it provides a concrete, reusable set of elicitation questions that can be integrated into existing RE processes. Third, it supplies quantitative evidence that the approach improves requirement quality and stakeholder confidence.

Limitations include the narrow domain focus (healthcare and manufacturing) and the relatively small sample size, which may affect generalizability. Future work is suggested in three areas: extending validation to other sectors such as finance or education, automating question generation through natural‑language processing, and embedding the checklist into requirements management tools for seamless traceability.

In conclusion, the context‑driven default‑requirements framework offers a systematic, empirically validated method for surfacing hidden stakeholder assumptions, thereby reducing omissions and conflicts and enhancing overall project success. Its checklist format makes it readily adoptable in practice, and the presented guidelines help practitioners balance the added elicitation effort against the measurable gains in requirement quality.


Comments & Academic Discussion

Loading comments...

Leave a Comment