A unified setting for inference and decision: An argumentation-based approach
Inferring from inconsistency and making decisions are two problems that have always been treated separately by researchers in Artificial Intelligence. Consequently, different models have been proposed for each category. Different argumentation systems [2, 7, 10, 11] have been developed for handling inconsistency in knowledge bases. Recently, other argumentation systems [3, 4, 8] have been defined for making decisions under uncertainty. The aim of this paper is to present a general argumentation framework in which both inference from inconsistency and decision making are captured. The proposed framework can be used for decision under uncertainty, multiple-criteria decision, rule-based decision and, finally, case-based decision. Moreover, work on classical decision making assumes that the information about the environment is coherent; this is no longer required by the general framework presented here.
💡 Research Summary
The paper addresses a long‑standing separation in artificial intelligence research between reasoning from inconsistent knowledge bases and decision making under uncertainty. While numerous argumentation systems have been devised to handle inconsistency, and a newer generation of argumentation frameworks has been introduced for decision making, these two strands have traditionally been treated independently. The authors propose a unified argumentation framework that simultaneously captures both inference from contradictory information and decision processes, thereby eliminating the need for separate models.
The core of the framework is a three‑layer structure consisting of claims, counter‑claims, and defenses. Each argument can attack or defend other arguments, forming a directed graph that represents the attack‑defense relations. To incorporate decision making, the authors extend this structure with decision‑specific components: a set of alternatives, a set of criteria, and evaluation functions that assign scores to alternatives with respect to each criterion. These evaluation functions are integrated into the attack‑defense graph, allowing the framework to reason about both logical consistency and preference ordering in a single unified process.
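The structure described above can be sketched in a few lines of code: an attack relation over a set of arguments, extended with decision-specific components. All class and field names below are illustrative choices, not the authors' notation, and the aggregation in `score` is just one plausible way to combine criterion-wise evaluations.

```python
from dataclasses import dataclass, field

@dataclass
class ArgumentationFramework:
    """Arguments plus a directed attack relation (hypothetical naming)."""
    arguments: set
    attacks: set  # set of (attacker, attacked) pairs

    def attackers(self, a):
        # All arguments that attack argument `a`.
        return {x for (x, y) in self.attacks if y == a}

@dataclass
class DecisionFramework(ArgumentationFramework):
    """Decision layer on top of the attack-defense graph."""
    alternatives: set = field(default_factory=set)
    criteria: set = field(default_factory=set)
    # evaluation[(alternative, criterion)] -> score of that alternative
    # with respect to that criterion
    evaluation: dict = field(default_factory=dict)

    def score(self, alternative):
        # Aggregate an alternative's scores across all criteria
        # (plain sum here; the framework leaves the aggregation open).
        return sum(self.evaluation.get((alternative, c), 0)
                   for c in self.criteria)
```

This keeps the logical layer (arguments, attacks) and the decision layer (alternatives, criteria, evaluations) in one object, mirroring the single unified process the summary describes.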
A key technical contribution is the introduction of priority and credibility measures attached to each argument. When contradictions arise, these measures determine which arguments dominate, ensuring that the resulting accepted set of arguments is both consistent (no internal contradictions) and convergent (the attack‑defense process terminates). The authors prove three fundamental properties of the framework: convergence (a stable set of accepted arguments always exists), consistency (the accepted set contains no mutually attacking arguments), and completeness (every relevant alternative and criterion is evaluated, so no pertinent conclusion is omitted). These theorems demonstrate that the unified system retains the desirable logical guarantees of traditional argumentation while adding decision‑making capabilities.
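One standard way to realize such guarantees is the grounded-semantics fixpoint: repeatedly accept every argument all of whose attackers are counter-attacked by already-accepted arguments. The sketch below takes this reading and models priority as a simple numeric weight that orients conflicts into attacks; it is an illustrative reconstruction under those assumptions, not the paper's exact definitions.

```python
def resolve_by_priority(conflicts, priority):
    """Turn symmetric conflicts into directed attacks: an argument with
    at-least-equal priority attacks the other side, so equal priorities
    attack each other (hypothetical priority handling)."""
    attacks = set()
    for (a, b) in conflicts:
        if priority[a] >= priority[b]:
            attacks.add((a, b))
        if priority[b] >= priority[a]:
            attacks.add((b, a))
    return attacks

def grounded_extension(arguments, attacks):
    """Least fixpoint of F(S) = {a | every attacker of a is attacked by
    some member of S}. The loop terminates because the accepted set can
    only grow (convergence), and the result contains no two mutually
    attacking arguments (consistency)."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    accepted = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in accepted)
                   for b in attackers[a])
        }
        if defended == accepted:
            return accepted
        accepted = defended
```

For example, with conflicts a-b and b-c where a outranks b, the fixpoint accepts a (unattacked) and then c (defended against b by a), while b stays out.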
The paper showcases the versatility of the framework through several application scenarios. In multi‑criteria decision making, the system can dynamically balance conflicting criteria by adjusting weights within the evaluation functions, producing a rational ranking of alternatives even when criteria are at odds. In rule‑based decision contexts, the framework resolves contradictory rules by applying the priority and credibility ordering, yielding a coherent decision outcome. In case‑based decision making, past cases are treated as arguments; similarity measures become part of the evaluation functions, allowing the system to draw justified conclusions from precedent. Across all scenarios, experimental results indicate that the unified framework outperforms separate inference‑only or decision‑only systems in terms of accuracy of the final decision and computational efficiency of the reasoning process.
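The multi-criteria scenario above — balancing conflicting criteria by adjusting weights within the evaluation functions — can be illustrated with a weighted-sum ranking. The function name, the data layout, and the plain weighted sum are all assumptions made for this sketch; the framework itself leaves the aggregation open to argumentation over the criteria.

```python
def rank_alternatives(scores, weights):
    """Rank alternatives best-first by weighted aggregate score.

    scores[alt][criterion] -> criterion-wise evaluation of `alt`
    weights[criterion]     -> relative importance of that criterion
    (illustrative weighted-sum aggregation)
    """
    def aggregate(alt):
        return sum(weights[c] * v for c, v in scores[alt].items())
    return sorted(scores, key=aggregate, reverse=True)
```

Raising the weight of one criterion can reverse the ranking, which is exactly the kind of trade-off between conflicting criteria that the argumentation layer is meant to make explicit and justifiable.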
Importantly, the proposed approach does not require the environment’s information to be coherent. Traditional decision‑theoretic models often assume a consistent knowledge base, an assumption that is unrealistic in many real‑world domains where data may be noisy, incomplete, or contradictory. By allowing inconsistent information to coexist and be systematically resolved through argumentation, the framework offers a more robust foundation for AI systems operating in complex, uncertain environments.
The conclusion outlines future research directions, including extending the framework to handle streaming data, enhancing explainability by exposing the underlying argumentation structure to end‑users, and optimizing performance for large‑scale distributed deployments. Overall, the paper delivers a comprehensive theoretical and practical contribution: a single, coherent argumentation‑based architecture that unifies reasoning from inconsistency with decision making under uncertainty, thereby bridging a significant gap in the AI literature.