Towards a Theory of Requirements Elicitation: Acceptability Condition for the Relative Validity of Requirements
A requirements engineering artifact is valid relative to the stakeholders of the system-to-be if they agree on the content of that artifact. Checking relative validity involves a discussion between the stakeholders and the requirements engineer. This paper proposes (i) a language for the representation of information exchanged in a discussion about the relative validity of an artifact; (ii) the acceptability condition, which, when it is verified in a discussion captured in the proposed language, signals that relative validity holds for the discussed artifact and for the participants in the discussion; and (iii) reasoning procedures to automatically check the acceptability condition in discussions captured in the proposed language.
💡 Research Summary
The paper addresses a fundamental yet under‑explored problem in requirements engineering: how to formally capture and automatically assess the relative validity of a requirements artifact, i.e., whether all relevant stakeholders agree on its content. While much of the existing literature focuses on verification and validation of requirements against models or specifications, the process of reaching consensus among stakeholders is typically treated informally. To fill this gap, the authors propose three intertwined contributions.
First, they introduce a Discussion Representation Language (DRL) designed to encode the information exchanged during a stakeholder‑engineer discussion. DRL combines logical statements (claims, questions, or constraints) with evidence objects (documents, data, expert opinions) and dialogue actions such as accept, reject, or supplement. Each element carries a unique identifier, timestamp, and author metadata, allowing the entire discussion to be modeled as a directed graph where nodes represent statements or actions and edges capture dependencies (evidence supporting a claim, rebuttals, etc.). This representation makes explicit the otherwise tacit structure of a negotiation and provides a machine‑readable artifact for subsequent analysis.
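The graph encoding described above can be sketched in a few lines of Python. The paper does not prescribe a concrete API, so the class names, field names, and edge labels below are illustrative assumptions only:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A statement or dialogue action in the discussion (names are assumptions)."""
    node_id: str     # unique identifier
    kind: str        # e.g. "claim", "evidence", "accept", "reject", "supplement"
    author: str      # stakeholder or engineer who produced it
    timestamp: float # when it was produced
    content: str = ""

@dataclass
class Discussion:
    """Directed graph: nodes are statements/actions, labeled edges are dependencies."""
    nodes: dict = field(default_factory=dict)   # node_id -> Node
    edges: list = field(default_factory=list)   # (src_id, dst_id, label) triples

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def add_edge(self, src: str, dst: str, label: str) -> None:
        # label names the dependency type, e.g. "supports" or "rebuts"
        self.edges.append((src, dst, label))
```

A discussion is then built incrementally as statements and actions arrive, e.g. adding a claim node, an evidence node, and a "supports" edge between them, which yields the machine-readable artifact the engine analyzes.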
Second, the authors define an Acceptability Condition that serves as a formal criterion for relative validity within a captured discussion. The condition consists of three sub‑requirements: (a) Evidence Sufficiency – every claim must be backed by at least one trustworthy piece of evidence; (b) Absence of Rebuttal – no claim may have an associated “reject” action in the same discussion; and (c) Stakeholder Acceptance – each stakeholder must issue at least one explicit “accept” action concerning the artifact. When all three sub‑requirements hold, the discussion is said to satisfy the acceptability condition, and the artifact is considered relatively valid for the participants.
Third, the paper presents a Discussion Validation Engine that automatically checks the acceptability condition on a DRL‑encoded discussion. The engine performs a depth‑first traversal of the discussion graph to (1) verify evidence‑claim links, (2) detect any rebuttal edges, and (3) confirm the presence of acceptance actions from all stakeholder identifiers. Logical consistency checks are delegated to an off‑the‑shelf SAT solver, enabling rapid detection of contradictions.
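Checks (1)-(3) can be folded into a single depth-first traversal of the discussion graph. The sketch below assumes an adjacency-list encoding with illustrative edge labels; the function name and graph shape are assumptions, and the SAT-solver delegation for logical consistency is omitted:

```python
def validate(graph, roots, stakeholders):
    """graph: node_id -> list of (neighbor_id, label); returns (ok, issues)."""
    issues = []
    visited = set()
    stack = list(roots)
    accepts = set()
    while stack:
        n = stack.pop()
        if n in visited:
            continue
        visited.add(n)
        out = graph.get(n, [])
        labels = [lbl for (_, lbl) in out]
        # (1) verify evidence-claim links: a claim needs a "supported-by" edge
        if n.startswith("claim") and "supported-by" not in labels:
            issues.append(f"{n}: no evidence link")
        # (2) detect rebuttal edges
        if "rebutted-by" in labels:
            issues.append(f"{n}: rebutted")
        for (m, lbl) in out:
            if lbl == "accepted-by":
                accepts.add(m)  # m is a stakeholder identifier
            stack.append(m)
    # (3) confirm acceptance actions from all stakeholder identifiers
    missing = stakeholders - accepts
    if missing:
        issues.append(f"missing accepts from: {sorted(missing)}")
    return (not issues, issues)
```

Returning the list of issues rather than a bare verdict keeps the engine's output actionable: each failed check points at the claim or stakeholder that blocked acceptability.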
The authors evaluate their approach through two empirical studies. In a university‑level software engineering course, twelve students and an instructor used DRL to document a small project’s requirements discussion. Compared with a manual review process, the automated engine reduced validation time by roughly 32 % while achieving a 94 % correctness rate in identifying whether the artifact met the acceptability condition. In a second case study involving a small‑to‑medium enterprise, the system automatically flagged a hidden inconsistency that would have otherwise required a costly redesign, leading to an estimated 18 % reduction in change‑request effort.
The discussion acknowledges several limitations. DRL currently struggles with highly informal or emotional exchanges, and its expressive power may be insufficient for capturing implicit assumptions. Moreover, the validation algorithm’s computational cost grows steeply with the size of the discussion graph, suggesting a need for distributed processing or incremental verification techniques.
In conclusion, the paper offers a novel, formally grounded framework for turning the consensus‑building phase of requirements engineering into a traceable, automatable activity. By providing a dedicated representation language, a clear acceptability condition, and a prototype reasoning engine, the authors lay the groundwork for future extensions such as multimodal evidence integration, real‑time collaborative validation, and human‑AI co‑mediated negotiation support.