Optimizing the computation of overriding
We introduce optimization techniques for reasoning in DLN—a recently introduced family of nonmonotonic description logics whose characterizing features appear well-suited to model the applicative examples naturally arising in biomedical domains and semantic web access control policies. Such optimizations are validated experimentally on large KBs with more than 30K axioms. Speedups exceed 1 order of magnitude. For the first time, response times compatible with real-time reasoning are obtained with nonmonotonic KBs of this size.
💡 Research Summary
The paper addresses a critical scalability bottleneck in DLN, a recently introduced family of non-monotonic description logics that is especially suited to biomedical ontologies and semantic-web access-control policies. DLN extends a classical DL with normality concepts (NC) and defeasible inclusions (DIs) of the form C ⊑ⁿ D, together with a strict partial order ≺ that ranks DIs by priority. The semantics is defined by iteratively adding the translations of the DIs to a classical knowledge base, discarding a DI whenever its addition would make the normality concept of its antecedent unsatisfiable. If a conflict cannot be resolved by the priority order, DLN explicitly signals inconsistency (NC ⊑ ⊥), forcing a knowledge engineer to intervene. This distinguishes DLN from other non-monotonic DLs, which silently neutralize such conflicts.
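The iterative construction can be sketched in miniature. The snippet below is a toy illustration under heavy simplifying assumptions, not the paper's actual procedure: integer ranks stand in for the priority order ≺, axioms are reduced to (concept, trait) pairs, and `consistent` stands in for a classical DL consistency check.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DI:
    """Toy defeasible inclusion C ⊑ⁿ D; smaller `rank` = higher priority.
    (Hypothetical encoding: the paper's ≺ is a strict partial order,
    simplified here to integer ranks.)"""
    antecedent: str
    consequent: str
    rank: int

def toy_consistent(axioms):
    # Toy stand-in for a classical DL consistency check: a set of
    # (concept, trait) pairs is inconsistent when some concept gets
    # both a trait and its negation "¬trait".
    seen = {}
    for concept, trait in axioms:
        base, negated = trait.lstrip("¬"), trait.startswith("¬")
        if seen.setdefault((concept, base), negated) != negated:
            return False
    return True

def naive_dln_closure(strict, dis, consistent):
    """Process DIs in priority order; keep a DI only if adding its
    translation leaves the KB consistent, otherwise discard it as
    overridden by higher-priority knowledge."""
    kept = list(strict)
    for di in sorted(dis, key=lambda d: d.rank):
        candidate = kept + [(di.antecedent, di.consequent)]
        if consistent(candidate):
            kept = candidate
        # else: overridden; the DI's translation is dropped
    return kept
```

For example, with the strict axiom `("penguin", "¬fly")`, the DI `DI("penguin", "fly", 1)` is discarded as overridden, while `DI("bird", "fly", 2)` survives.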
Two complementary optimization techniques are proposed to bring DLN inference into the real-time regime for knowledge bases containing more than 30,000 axioms.
- Module Extraction – The authors observe that most axioms are irrelevant to a given query. They adapt the ⊤⊥*‑Mod algorithm to DLN, extracting a syntactic module M that preserves all entailments whose signature is contained in the query's signature. Because DLN reasoning reduces to classical DL reasoning over an expanded KB (including normality axioms), the module extraction must respect both the classical part and the normality translations. The paper provides a detailed correctness proof showing that reasoning over M yields exactly the same DLN consequences as reasoning over the full KB. Empirically, the extracted modules are often an order of magnitude smaller than the original KB, dramatically reducing the amount of work required for each reasoning step.
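A crude, signature-driven fixpoint can illustrate the flavor of module extraction. The sketch below is hypothetical and far weaker than ⊤⊥*‑Mod: axioms are bare `(lhs, rhs)` signature tuples, and an axiom joins the module whenever its left-hand-side signature meets the signature grown so far from the query.

```python
def extract_module(axioms, seed_signature):
    """Hypothetical signature-reachability sketch (not ⊤⊥*-Mod).
    Each axiom is a pair (lhs_symbols, rhs_symbols). An axiom enters
    the module when its left-hand-side signature intersects the current
    signature; the signature then absorbs the axiom's full signature,
    and the loop repeats to a fixpoint."""
    sig = set(seed_signature)
    module = []
    changed = True
    while changed:
        changed = False
        for axiom in axioms:
            lhs, rhs = axiom
            if axiom not in module and sig & set(lhs):
                module.append(axiom)
                sig |= set(lhs) | set(rhs)
                changed = True
    return module
```

Starting from the query signature {"Bird"}, the module pulls in Bird ⊑ Animal and then Animal ⊑ LivingThing, while an unrelated Car ⊑ Vehicle axiom stays out, which is the intuition behind the order-of-magnitude size reduction reported above.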
- Optimistic Incremental Reasoning – Standard DLN reasoning processes DIs in non-increasing priority order, performing a consistency check after each addition. While adding a new DI is cheap, removing a previously added DI (because it turns out to be overridden) is costly, as it may trigger a cascade of recomputations. The authors introduce an "optimistic" strategy that tries to avoid deletions altogether: when a higher-priority DI has already been successfully incorporated, the algorithm simply skips lower-priority DIs that would be overridden, assuming they would not affect consistency. Only if a later consistency check fails does the algorithm backtrack and explicitly remove the offending DI. This lazy deletion dramatically cuts the number of expensive consistency tests.
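The lazy-deletion idea can be caricatured as a fast path plus a fallback. In this hypothetical sketch, the optimistic pass adds every DI in one shot with a single consistency check; only when that check fails does it fall back to the careful per-DI loop that pays for overriding, and the names and encoding (triples `(antecedent, consequent, rank)`) are illustrative, not the paper's API.

```python
def careful_closure(strict, dis, consistent):
    # Slow path: add DIs one by one in priority order (smaller rank
    # first), discarding any DI whose addition breaks consistency.
    kept = list(strict)
    for ant, cons, _rank in sorted(dis, key=lambda d: d[2]):
        if consistent(kept + [(ant, cons)]):
            kept.append((ant, cons))
    return kept

def optimistic_closure(strict, dis, consistent):
    # Fast path: optimistically assume no DI is overridden and add all
    # of them at once, so one consistency check replaces many
    # incremental ones.
    attempt = list(strict) + [(ant, cons) for ant, cons, _rank in dis]
    if consistent(attempt):
        return attempt
    # Optimism failed: some DI is overridden, so backtrack to the
    # careful per-DI procedure.
    return careful_closure(strict, dis, consistent)
```

When no DI is overridden, the fast path succeeds and every expensive deletion (and most consistency tests) is skipped; only conflicting inputs pay the full price.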
The experimental evaluation uses large, real-world ontologies from the biomedical domain and from semantic-web policy management. Four configurations are compared: (i) baseline DLN, (ii) baseline + module extraction, (iii) baseline + optimistic incremental reasoning, and (iv) both optimizations combined. Each optimization yields substantial speed-ups on its own, and the combination exceeds one order of magnitude. In the largest benchmark, query response times drop from several seconds to well under 200 ms, satisfying real-time requirements. Importantly, all optimized runs produce exactly the same entailments as the baseline, confirming that no logical information is lost.
The paper’s contributions are threefold:
- It formalizes DLN's reasoning procedure in a way that makes it amenable to classical DL tooling, preserving tractability for low-complexity DLs such as EL++ and DL-Lite.
- It adapts and proves correctness of a syntactic module extraction technique for a non‑monotonic setting, a non‑trivial achievement given the additional normality constructs.
- It proposes a novel optimistic incremental algorithm that leverages the priority-driven nature of DLN to minimize deletions, a technique that could transfer to other priority-based non-monotonic logics.
Overall, the work demonstrates that non‑monotonic description logics, previously considered too heavyweight for large‑scale applications, can now be deployed in latency‑sensitive environments such as clinical decision support or dynamic access‑control enforcement. The authors make their implementation and test suites publicly available, encouraging further research and adoption.