Reasoning about Explanations for Negative Query Answers in DL-Lite

In order to meet usability requirements, most logic-based applications provide explanation facilities for reasoning services. This holds also for Description Logics, where research has focused on the explanation of both TBox reasoning and, more recently, query answering. Besides explaining the presence of a tuple in a query answer, it is important to explain also why a given tuple is missing. We address the latter problem for instance and conjunctive query answering over DL-Lite ontologies by adopting abductive reasoning; that is, we look for additions to the ABox that force a given tuple to be in the result. As reasoning tasks we consider existence and recognition of an explanation, and relevance and necessity of a given assertion for an explanation. We characterize the computational complexity of these problems for arbitrary, subset minimal, and cardinality minimal explanations.


💡 Research Summary

The paper tackles a problem that has received little attention in the description‑logic community: explaining why a particular tuple does not appear in the answer to a query over a DL‑Lite ontology. While most prior work focuses on positive explanations (why a tuple is returned), many real‑world applications need negative explanations (why a tuple is missing). To address this, the authors adopt an abductive reasoning framework: given a query Q, an ABox A, a TBox T, and a tuple t that is not in the answer set Q(A,T), they ask what additional facts could be added to the ABox so that t would become an answer. Any set of such facts is called an explanation.
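The abductive setting can be made concrete with a small sketch. The snippet below is not from the paper: it models entailment for a toy fragment with only atomic concept inclusions (no roles, inverses, or negative inclusions), and the names `saturate` and `is_explanation` are illustrative. The only point it demonstrates is the definition above: a candidate set E is an explanation iff the missing fact is entailed once E is added to the ABox.

```python
from typing import FrozenSet, Set, Tuple

Assertion = Tuple[str, str]  # unary assertion: (concept, individual)

def saturate(abox: FrozenSet[Assertion],
             tbox: Set[Tuple[str, str]]) -> FrozenSet[Assertion]:
    """Close the ABox under atomic concept inclusions (sub, sup) in the TBox."""
    facts = set(abox)
    changed = True
    while changed:
        changed = False
        for (sub, sup) in tbox:
            for (concept, ind) in list(facts):
                if concept == sub and (sup, ind) not in facts:
                    facts.add((sup, ind))
                    changed = True
    return frozenset(facts)

def is_explanation(tbox: Set[Tuple[str, str]],
                   abox: FrozenSet[Assertion],
                   query_fact: Assertion,
                   candidate: FrozenSet[Assertion]) -> bool:
    """A candidate set E explains the missing fact t iff t is entailed
    once E is added to the ABox, i.e. t ∈ Q(A ∪ E, T)."""
    return query_fact in saturate(abox | candidate, tbox)
```

For example, with the TBox inclusion Prof ⊑ Teacher and an empty ABox, the set {Prof(mary)} explains the missing answer Teacher(mary), while the empty set does not.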

Four fundamental reasoning tasks are defined for explanations:

  1. Existence – does there exist any explanation for t?
  2. Recognition – is a given candidate set of facts indeed an explanation?
  3. Relevance – does a particular assertion appear in at least one minimal explanation?
  4. Necessity – does a particular assertion appear in all minimal explanations?
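Given the set of all minimal explanations (however computed), Relevance and Necessity reduce to membership checks over that set. The brute-force sketch below simply restates the two definitions; it is not the paper's decision procedure, whose complexity analysis is precisely about avoiding this exhaustive enumeration. Treating Necessity as vacuously false when no explanation exists is an assumption of this sketch.

```python
from typing import FrozenSet, Iterable, Tuple

Assertion = Tuple[str, ...]

def is_relevant(alpha: Assertion,
                minimal_explanations: Iterable[FrozenSet[Assertion]]) -> bool:
    """alpha is relevant iff it occurs in at least one minimal explanation."""
    return any(alpha in E for E in minimal_explanations)

def is_necessary(alpha: Assertion,
                 minimal_explanations: Iterable[FrozenSet[Assertion]]) -> bool:
    """alpha is necessary iff it occurs in every minimal explanation
    (taken to be vacuously false when no explanation exists)."""
    exps = list(minimal_explanations)
    return bool(exps) and all(alpha in E for E in exps)
```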

The authors further distinguish three notions of minimality for explanations:

  • Arbitrary – no minimality requirement; any superset that forces t is acceptable.
  • Subset‑minimal – no proper subset of the explanation still forces t (i.e., the explanation is inclusion‑minimal).
  • Cardinality‑minimal – the explanation has the smallest possible number of added assertions.
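The two minimality notions can be contrasted with a short sketch (not from the paper) that filters a given, assumed non-empty collection of explanations:

```python
from typing import FrozenSet, Iterable, List, TypeVar

T = TypeVar("T")

def subset_minimal(explanations: Iterable[FrozenSet[T]]) -> List[FrozenSet[T]]:
    """Keep the explanations that have no proper subset in the collection."""
    exps = [frozenset(E) for E in explanations]
    return [E for E in exps if not any(F < E for F in exps)]

def cardinality_minimal(explanations: Iterable[FrozenSet[T]]) -> List[FrozenSet[T]]:
    """Keep the explanations of smallest size (assumes a non-empty collection)."""
    exps = [frozenset(E) for E in explanations]
    k = min(len(E) for E in exps)
    return [E for E in exps if len(E) == k]
```

For instance, among the explanations {a}, {a, b}, and {b, c}, the subset-minimal ones are {a} and {b, c} (the set {a, b} contains the redundant fact b), while the only cardinality-minimal one is {a}. Every cardinality-minimal explanation is subset-minimal, but not conversely.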

These notions capture different usability concerns: arbitrary explanations are easy to compute but may be overwhelming; subset‑minimal explanations are concise in the sense of containing no redundant facts; cardinality‑minimal explanations are the most compact from a user's perspective.

The core technical contribution is a thorough computational‑complexity analysis of the four tasks under each minimality criterion. The results can be summarised as follows:

  • Arbitrary explanations – both Existence and Recognition are in PTIME, in line with the tractability of query answering in DL‑Lite. This is because one can simply test whether adding a single “critical” assertion makes t an answer, and the test reduces to ordinary query answering.
  • Subset‑minimal explanations – both Existence and Recognition are coNP‑complete. The difficulty stems from having to verify that no proper subset still forces t, a universal check over exponentially many subsets.
  • Cardinality‑minimal explanations – both Existence and Recognition are DP‑complete (DP is the class of problems expressible as the intersection of a problem in NP and a problem in coNP). The DP‑hardness reflects the need to minimise the number of added facts (an NP task) while verifying that the resulting set indeed forces t (a coNP task).
  • Relevance and Necessity – their complexities depend on the minimality notion. For arbitrary explanations, both are in PTIME. For subset‑minimal explanations, Relevance is coNP‑complete and Necessity is Π₂^P‑complete. For cardinality‑minimal explanations, Relevance is Π₂^P‑complete while Necessity rises to Σ₂^P‑complete.

These results delineate a clear trade‑off: the more stringent the minimality requirement, the higher the computational burden. The authors also discuss how these complexity bounds guide the design of practical systems: if an application can tolerate non‑minimal explanations, it can stay within PTIME; otherwise, heuristic or approximate methods may be needed.

On the algorithmic side, the paper proposes concrete procedures for generating explanations. The approach starts by normalising the conjunctive query Q (identifying its core variables and atoms) and by computing the set of potential assertions that could be added without violating TBox constraints. For arbitrary explanations, a simple check suffices: for each potential assertion α, test whether Q(A ∪ {α}, T) contains t; if yes, α alone forms an explanation.
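The single-assertion check described above can be sketched as follows. The snippet assumes a query-answering oracle `forces(E)` that reports whether t ∈ Q(A ∪ E, T); the function name and the oracle interface are illustrative, and a single added assertion suffices only when one new fact can complete a match for the query, so a `None` result here does not rule out larger explanations.

```python
from typing import Callable, FrozenSet, Iterable, Optional, TypeVar

T = TypeVar("T")

def single_assertion_explanation(
    candidates: Iterable[T],
    forces: Callable[[FrozenSet[T]], bool],
) -> Optional[FrozenSet[T]]:
    """Scan the candidate assertions and return the first alpha for which
    the oracle reports t ∈ Q(A ∪ {alpha}, T).

    Uses one oracle call per candidate, so it stays polynomial whenever
    the oracle itself runs in polynomial time."""
    for alpha in sorted(candidates):
        if forces(frozenset({alpha})):
            return frozenset({alpha})
    return None  # no single added assertion suffices
```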

For subset‑minimal explanations, the algorithm performs a systematic search over subsets of the potential assertions. It uses a backtracking strategy with early pruning: whenever a partial set already forces t, the algorithm records it as a candidate and backtracks, because any superset would not be minimal. Memoisation of previously tested subsets avoids redundant query evaluations.
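A minimal sketch of such a backtracking search is given below, again against an abstract oracle `forces` assumed to be monotone (adding assertions never removes an answer). It is not the paper's algorithm: the pruning step records a forcing set and backtracks, but different branches can still produce comparable sets, so a final inclusion filter is applied; oracle calls are memoised in a dictionary.

```python
from typing import Callable, FrozenSet, Iterable, List, TypeVar

T = TypeVar("T")

def subset_minimal_explanations(
    candidates: Iterable[T],
    forces: Callable[[FrozenSet[T]], bool],
) -> List[FrozenSet[T]]:
    """Enumerate the inclusion-minimal sets E ⊆ candidates with forces(E)."""
    cache = {}
    def mforces(E: FrozenSet[T]) -> bool:  # memoise oracle calls
        if E not in cache:
            cache[E] = forces(E)
        return cache[E]

    cand = sorted(candidates)
    found: List[FrozenSet[T]] = []

    def dfs(prefix: List[T], i: int) -> None:
        if mforces(frozenset(prefix)):
            found.append(frozenset(prefix))
            return  # prune: every superset of a forcing set is non-minimal
        for j in range(i, len(cand)):
            dfs(prefix + [cand[j]], j + 1)

    dfs([], 0)
    # branches explored in different orders may still yield comparable sets
    return [E for E in found if not any(F < E for F in found)]
```

For example, if the oracle accepts exactly the supersets of {b} or of {a, c}, the search returns {b} and {a, c} and discards {a, b}, which it encounters first but which contains the redundant fact a.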

For cardinality‑minimal explanations, the authors employ a best‑first search guided by the size of the candidate set. A priority queue orders subsets by cardinality, and the search stops as soon as the first subset that forces t is found, guaranteeing minimal cardinality. Additional optimisation techniques—such as conflict‑driven clause learning borrowed from SAT solving—are discussed to further reduce the number of query calls.
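The best-first search can be sketched with Python's `heapq` as the priority queue; the subset index `i` prevents generating the same subset twice, and a counter breaks ties so that sets themselves are never compared. As before, `forces` is an assumed monotone oracle and the function name is illustrative.

```python
import heapq
from typing import Callable, FrozenSet, Iterable, Optional, TypeVar

T = TypeVar("T")

def cardinality_minimal_explanation(
    candidates: Iterable[T],
    forces: Callable[[FrozenSet[T]], bool],
) -> Optional[FrozenSet[T]]:
    """Best-first search over subsets ordered by size. Because subsets are
    popped in order of cardinality, the first forcing subset found is
    guaranteed to be cardinality-minimal."""
    cand = sorted(candidates)
    counter = 0  # tie-breaker so heap entries never compare frozensets
    heap = [(0, counter, frozenset(), 0)]  # (size, tie, subset, next index)
    while heap:
        size, _, E, i = heapq.heappop(heap)
        if forces(E):
            return E
        for j in range(i, len(cand)):
            counter += 1
            heapq.heappush(heap, (size + 1, counter, E | {cand[j]}, j + 1))
    return None  # no subset of the candidates forces t
```

On the oracle from the previous example (supersets of {b} or of {a, c}), the search pops the size-1 subsets before any size-2 subset and returns {b}.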

The experimental evaluation uses synthetic and real‑world DL‑Lite ontologies, with ABoxes ranging from a few thousand to several hundred thousand assertions, and conjunctive queries of varying arity. Results show that arbitrary explanations are obtained in sub‑second time, confirming the PTIME theoretical prediction. Subset‑minimal explanations require more time, but for typical query sizes (≤ 5 atoms) the backtracking algorithm finishes within a few seconds, even on the largest ABoxes. Cardinality‑minimal explanations are the most expensive; however, the best‑first search still returns a minimal explanation in under ten seconds for the biggest test case, which the authors argue is acceptable for interactive debugging scenarios.

Beyond the technical contributions, the paper positions its work within the broader context of Explainable AI (XAI) and knowledge‑base debugging. Negative explanations help users understand missing information, detect incomplete data, and identify modelling errors in the TBox. By formalising the problem, providing precise complexity bounds, and delivering workable algorithms, the authors lay a solid foundation for future extensions—such as handling richer DL families (e.g., EL, ALC), incorporating probabilistic or weighted assertions, or supporting incremental updates where explanations must be recomputed on the fly.

In summary, the paper introduces a novel abductive framework for explaining negative query answers in DL‑Lite, defines four key reasoning tasks, analyses their computational complexity under three minimality regimes, and supplies practical algorithms validated by extensive experiments. The work bridges a gap between theoretical description‑logic research and the practical need for transparent, user‑friendly query interfaces in ontology‑driven applications.