Capability Safety as Datalog: A Foundational Equivalence
Authors: Cosimo Spera
March 2026

Abstract

We prove that capability safety admits an exact representation as propositional Datalog evaluation (Datalog_prop: the monadic, ground, function-free fragment of first-order logic), enabling the transfer of algorithmic and structural results unavailable in the native formulation. This addresses two structural limitations of the capability hypergraph framework of Spera [2026]: the absence of efficient incremental maintenance, and the absence of a decision procedure for audit surface containment. The equivalence is tight: capability hypergraphs correspond to exactly this fragment, no more.

Algorithmic consequences. Under this identification, the safe goal discovery map G_F(A) is a stratified Datalog view. This structural fact yields the Locality Gap Theorem, which establishes the first structural separation between global recomputation and local capability updates: DRed maintenance costs O(|Δ| · (n + mk)) per update versus O(|V| · (n + mk)) for recomputation; an explicit hard family witnesses an Ω(n) asymptotic gap; and an AND-inspection lower bound, proved via indistinguishable instance pairs in the oracle model, shows any correct algorithm must probe all k + 1 atoms in Φ(u) = S_u ∪ {v_u} to verify rule activation. Together, these yield the first provable separation between global and local safety reasoning in agentic systems.

Structural and semantic consequences. Structurally: audit surface containment G_F(A) ⊆ G_F(A') is decidable in polynomial time via Datalog_prop query containment, giving the first decision procedure for this problem; non-compositionality of safety is the non-modularity of Datalog derivations, providing a structural explanation rather than a case analysis.
Semantically: derivation certificates are why-provenance witnesses [Green et al., 2007], with a commutative semiring algebra enabling compression, composition, and uniform validation. Each open problem of Spera [2026] maps to a known open problem in Datalog theory, enabling direct transfer of thirty years of partial results.

Scope. The syntactic connection between AND-hyperedges and Horn clauses is classical; the contribution is to make it a tight formal equivalence and to derive consequences from it that are new. This gives the capability safety research agenda direct access to thirty years of Datalog results, without requiring new algorithms or new complexity analysis for problems already solved in database theory.

Keywords: Datalog; capability hypergraphs; AI safety; incremental view maintenance; provenance semirings; oracle lower bounds; agentic systems.

* Corresponding author. Minerva CQ, 114 Lester Ln, Los Gatos, CA 95032. cosimo@minervacq.com

Contents

1 Introduction
  1.1 A New Computational Lens for Agentic Safety
  1.2 What is Datalog?
  1.3 The Propositional Fragment
  1.4 Contributions
2 Related Work
3 Formal Background
  3.1 Capability Hypergraphs
  3.2 Propositional Datalog
  3.3 Provenance Semirings
4 The Encoding Theorem
  4.1 From Capability Hypergraphs to Datalog
  4.2 From Datalog to Capability Hypergraphs
8 5 The Tight Expressivity Theorem 9 6 The Prov enance Theorem 10 7 The Witness Corresp ondence 12 8 Op en Problem Corresp ondence 12 9 Non-Comp ositionalit y as a Datalog Theorem 13 10 Empirical Grounding 14 11 Capabilit y Safety Admits Efficien t Incremental Maintenance 14 12 Discussion and F uture W ork 20 12.1 What the Equiv alence Settles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 12.2 What the Equiv alence Op ens . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 12.3 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 A Empirical Detail: Deriv ation T rees and Aggregate Statistics 21 A.1 A Representativ e T ra jectory as a Datalog Deriv ation . . . . . . . . . . . . . . . . 21 A.2 A Real AND-Violation as Datalog Non-Mo dularit y . . . . . . . . . . . . . . . . . 22 A.3 Aggregate Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 2 1 In tro duction 1.1 A New Computational Lens for Agentic Safet y The safety of multi-agen t AI systems is ultimately a computational problem: giv en a set of capabilities agen ts currently hold and a set of forbidden states, determine what the agents can collectively reac h, how efficien tly that determination can be main tained as the system ev olves, and what certificates of safet y can be pro duced for regulators and auditors. Spera [2026] formalised this as a capabilit y hypergraph problem and prov ed non-comp ositionality of safet y , but the computational tools a v ailable within the h yp ergraph framew ork are limited: closure is computed by a fixed-p oin t iteration, and ev ery structural c hange requires recomputation from scratc h. The resolution. W e resolv e b oth limitations by pro ving that capabilit y safet y is exactly ev aluation in prop ositional Datalog. W e prov e this as a formal equiv alence, with explicit p olynomial-time enco dings in b oth directions. 
The syntactic connection between AND-hyperedges and Horn clauses has been noted before; what is new is making it exact and using it to transfer algorithmic results that have no counterpart in the hypergraph framework.

Why this is not trivial. Syntactic correspondences between hypergraphs and Horn clauses are known, but they do not imply semantic or algorithmic equivalence. In particular, they do not preserve safety closure or minimal unsafe sets, and they do not support incremental maintenance. The contribution of this work is to establish an exact equivalence with these properties, showing that the capability hypergraph closure, the structure of B(F), and the safe audit surface G_F(A) all have precise Datalog_prop counterparts, and to use this to derive new structural and algorithmic results, chiefly the Locality Gap Theorem, that are not accessible in the hypergraph framework alone.

Algorithmic consequences. The primary payoff is the Locality Gap Theorem (Theorem 11.3). In Spera [2026], every hyperedge update requires recomputing the safe audit surface G_F(A) from scratch at cost O(|V| · (n + mk)). The Datalog_prop identity shows G_F(A) is a view, making DRed incremental maintenance available at cost O(|Δ| · (n + mk)) per update. An explicit hard family witnesses an Ω(n) asymptotic gap. An AND-inspection lower bound, proved via indistinguishable instance pairs under an oracle model invoking Yao's minimax principle [Yao, 1977], shows any correct algorithm must probe all k + 1 atoms in Φ(u) = S_u ∪ {v_u} to verify rule activation. The Ω(k) bound reflects the AND-condition structure directly: it is the same conjunctive precondition structure that makes safety non-compositional.

Structural and semantic consequences. The identity gives two further results unavailable in the hypergraph framework.
Deciding whether one audit surface is contained in another, G_F(A) ⊆ G_F(A'), reduces to Datalog_prop query containment, giving the first polynomial-time decision procedure for this problem. Derivation certificates are why-provenance witnesses [Green et al., 2007], with a commutative semiring structure enabling compression, composition, and uniform validation within a single algebraic framework. Each open problem of Spera [2026] admits a reduction to a known open problem in Datalog learning theory, probabilistic Datalog, or view update, enabling direct transfer of thirty years of partial results.

This result shows that capability safety does not require a new computational theory, but can be analysed within an existing, well-understood logical framework, with immediate access to its algorithms, complexity results, and open problems.

1.2 What is Datalog?

Datalog is the fragment of first-order logic consisting of rules of the form B_1(x_1) ∧ ··· ∧ B_k(x_k) ⇒ H(y), where all variables are universally quantified, no function symbols appear, and H is a single relational atom. Given a finite database D, the semantics of a Datalog program Π over D is the least fixed point of the immediate consequence operator

    T_Π(I) = D ∪ { Hσ : (B_1 ∧ ··· ∧ B_k ⇒ H) ∈ Π, B_i σ ∈ I for all i },

computable in polynomial time in |D| [Abiteboul et al., 1995, Ceri et al., 1989].

1.3 The Propositional Fragment

When all predicates are unary and all terms are constants, Datalog reduces to propositional Horn clause logic. Rules have the form p_1 ∧ ··· ∧ p_k ⇒ q, where the p_i and q are propositional atoms. We denote this fragment Datalog_prop. The capability hypergraph framework lives precisely in this fragment.
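As a concrete illustration (ours, not an artifact of the paper), the least-fixed-point semantics T_Π ↑ ω of a Datalog_prop program can be evaluated by naive iteration of the immediate consequence operator; the rule representation and names below are illustrative.

```python
# Minimal sketch of Datalog_prop evaluation: iterate the immediate
# consequence operator T_Pi until no new atom is derived. A rule is a
# pair (set of body atoms, head atom); the EDB is a set of ground atoms.

def least_model(rules, edb):
    """Least fixed point of T_Pi over the extensional database `edb`."""
    model = set(edb)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in model and body <= model:
                model.add(head)
                changed = True
    return model

# has(a) ∧ has(b) ⇒ has(c);  has(c) ⇒ has(d)
rules = [({"has(a)", "has(b)"}, "has(c)"), ({"has(c)"}, "has(d)")]
print(sorted(least_model(rules, {"has(a)", "has(b)"})))
# -> ['has(a)', 'has(b)', 'has(c)', 'has(d)']
```

Semi-naive evaluation (mentioned in Section 2) would avoid rescanning every rule on every round; the naive loop suffices to illustrate the semantics.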
Each capability v ∈ V is a propositional atom has(v); each hyperedge (S, {v}) is a propositional Datalog rule; and the closure cl(A) is the least model of the corresponding Datalog program over {has(a) : a ∈ A}.

1.4 Contributions

1. Locality Gap Theorem (Theorems 11.1 and 11.3): the safe audit surface G_F(A) is a stratified Datalog_prop view, yielding the first provable separation between global and local safety reasoning in agentic systems. DRed maintenance costs O(|Δ| · (n + mk)) versus O(|V| · (n + mk)) for recomputation; an explicit hard family witnesses an Ω(n) asymptotic gap; an AND-inspection lower bound (proved via an oracle model and Yao's minimax principle) shows any correct algorithm must probe all k + 1 atoms in Φ(u) = S_u ∪ {v_u} to verify rule activation.

2. The Encoding Theorem (Theorems 4.3 and 4.5): a tight, two-directional equivalence between capability hypergraphs and Datalog_prop programs, with explicit polynomial-time isomorphisms in both directions preserving closure, safety, and B(F). The scope is propositional throughout; a parity-query separation (Theorem 5.1) proves the fragment cannot be enlarged.

3. Containment Decidability: audit surface containment G_F(A) ⊆ G_F(A') reduces to Datalog_prop query containment, giving the first polynomial-time decision procedure for this problem.

4. Provenance Algebra (Theorem 6.2): derivation certificates are why-provenance witnesses [Green et al., 2007] with a commutative semiring structure; B(F) equals the minimal witness antichain (Theorem 7.1).

5. Open Problem Transfer (Section 8): each open problem of Spera [2026] admits a reduction to a known open problem in Datalog theory, importing thirty years of partial results.

6.
Empirical Illustration (Section 10): the 900-trajectory corpus of Spera [2026] is re-expressed in Datalog vocabulary as illustration; detailed derivation trees appear in Appendix A.

2 Related Work

This paper sits at the intersection of three bodies of literature: Datalog theory and finite model theory, AI safety formalisms, and logic-based learning theory. We position the contribution relative to each.

Datalog and descriptive complexity. The foundational result underlying this paper is Immerman's [1986] theorem that Datalog captures polynomial-time query evaluation over ordered structures, and the subsequent work of Fagin [1974] on the relationship between Datalog and fixed-point logics. Our Corollary 5.2 re-derives the complexity results of Spera [2026] from this landscape, strengthening them by situating them in the descriptive-complexity characterisation of P rather than ad hoc circuit reductions. The coNP-completeness of computing B(F) (Corollary 5.2, Part 3) follows the classical result of Eiter and Gottlob [1995] on minimal transversal enumeration, which is one of the central open problems in database theory. The connection we establish, that B(F) is exactly the minimal-witness antichain for a monotone Datalog query, places this open problem in a well-studied context with a thirty-year literature of partial results.

Datalog provenance and semirings. The provenance semiring framework of Green et al. [2007] is the direct foundation for our Provenance Theorem (Theorem 6.2). Green et al. [2007] showed that Datalog derivations can be annotated with elements of any commutative semiring, yielding a uniform algebraic account of why-provenance, lineage, and uncertainty propagation. Our contribution is to show that the derivation certificates of Spera [2026] are exactly the why-provenance witnesses under the encoding, giving them a canonical algebraic structure that was not previously recognised.
Prior work on Datalog provenance has focused on relational queries [Abiteboul et al., 1995]; this paper extends the connection to the AI safety setting.

Logic-based learning and Horn clause learning. Cohen [1995] studied the learnability of Horn programs from positive and negative examples, establishing PAC-learning bounds for restricted Horn clause classes. His work is the direct precursor to the open question we raise in Section 8: whether the VC-dimension bound of O(n^2 k) from Spera [2026] can be tightened for structured capability hypergraph families. Dalmau et al. [2002] studied Datalog learning in the context of constraint satisfaction and bounded treewidth, providing Rademacher-complexity bounds that are substantially tighter than the Sauer–Shelah bound for programs with restricted dependency structure. The capability hypergraph setting, typically sparse, low fan-in, and tree-structured, is precisely the regime where these tighter bounds apply.

AI safety formalisms. The broader AI safety literature has proposed several formal frameworks for reasoning about agent behaviour. Leike et al. [2017] define safety in terms of reward functions over environment histories; this is orthogonal to the capability-composition setting, which is concerned with structural reachability rather than optimisation objectives. Contract-based design [Benveniste et al., 2018] and assume-guarantee reasoning [Jones, 1983] verify that a fixed composition satisfies a pre-specified property; the capability hypergraph framework characterises the set of all properties a dynamically growing capability set can ever reach, which is a strictly more demanding problem.
The closest formalisms in the planning literature, Petri net reachability [Murata, 1989] and AND/OR planning [Erol et al., 1994], encode conjunctive preconditions, but require search rather than a single fixed-point computation, and provide neither the non-compositionality theorem nor a polynomial-time certifiable audit surface. The present paper's contribution is not to replace these formalisms but to characterise precisely the capability hypergraph framework's position in the expressivity landscape by identifying it with Datalog_prop.

View update and incremental maintenance. The view update problem for Datalog (given a Datalog view and a desired change to the view, compute the minimal change to the base data) is a classical open problem [Abiteboul et al., 1995]. Section 8 shows that the adversarial robustness problem of Spera [2026] (MinUnsafeAdd) is an instance of this problem, placing its NP-hardness for b ≥ 2 in the context of known hardness results for general view update. The DRed algorithm [Abiteboul et al., 1995] and the semi-naïve evaluation strategy provide the incremental maintenance machinery that Theorem 11.1 applies to the safe audit surface.

3 Formal Background

3.1 Capability Hypergraphs

We summarise the relevant definitions from Spera [2026] in full, so that the present paper is self-contained for readers with a Datalog background who may not have read the source paper.

Definition 3.1 (Capability Hypergraph). A capability hypergraph is a pair H = (V, F) where V is a finite set of capability nodes and F is a finite set of hyperarcs, each of the form e = (S, T) with S, T ⊆ V and S ∩ T = ∅. The set S is the tail (joint preconditions) and T is the head (simultaneous effects). The hyperarc fires when all elements of S are simultaneously present, producing all elements of T. Directed graphs are the special case |S| = |T| = 1.
Definition 3.2 (Closure Operator). Let H = (V, F) and A ⊆ V. The closure cl_H(A) is the smallest set C ⊆ V satisfying: (i) A ⊆ C (extensivity); and (ii) for all (S, T) ∈ F: S ⊆ C ⇒ T ⊆ C (closed under firing). It is computed by the fixed-point iteration C_0 = A, C_{i+1} = C_i ∪ ⋃ {T : (S, T) ∈ F, S ⊆ C_i}, which terminates in at most |V| steps. The operator satisfies extensivity, monotonicity (A ⊆ B ⇒ cl(A) ⊆ cl(B)), and idempotence. The worklist implementation runs in O(n + mk), where n = |V|, m = |F|, and k = max_{e ∈ F} |S(e)|.

Definition 3.3 (Safe Region and Minimal Unsafe Antichain). Let F ⊆ V be a forbidden set. A configuration A ⊆ V is F-safe if cl_H(A) ∩ F = ∅. The safe region is R(F) = {A ⊆ V : cl_H(A) ∩ F = ∅}. The safe region is a lower set (downward-closed) in (2^V, ⊆): if A ∈ R(F) and B ⊆ A then B ∈ R(F) [Spera, 2026, Theorem 9.4]. Consequently its complement is an upper set, whose antichain of minimal elements is the minimal unsafe antichain:

    B(F) = { A ⊆ V : A ∉ R(F), and A \ {a} ∈ R(F) for all a ∈ A }.

By Dickson's lemma, B(F) is finite. Its key computational property [Spera, 2026, Theorem 9.5]: deciding B ∈ B(F) is coNP-complete.

Definition 3.4 (Emergent Capabilities and Near-Miss Frontier). Fix H = (V, F), A ⊆ V, and let C = cl_H(A). Let cl_1(A) denote the closure under the sub-hypergraph of singleton-tail hyperarcs only. Then:

• Emergent capabilities: Emg(A) = {v ∈ C \ A : v ∉ cl_1(A)}. These are capabilities reachable from A via conjunctive (AND) hyperarcs but not via any chain of singleton-tail arcs alone.

• Near-miss frontier: NMF_F(A) = {μ(e) : e ∈ ∂(A), μ(e) ∉ F, cl(A ∪ μ(e)) ∩ F = ∅}, where the boundary ∂(A) is the set of hyperarcs e = (S, {v}) with S ⊈ C and |S \ C| = 1, and μ(e) = S \ C is the single missing precondition.
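To make Definitions 3.2 and 3.3 concrete, the sketch below (our illustrative code, not the paper's artifacts) computes cl_H(A), the F-safety test, and, by brute force, the minimal unsafe antichain B(F) for a toy hypergraph. The enumeration is exponential in |V|, consistent with the coNP-completeness of membership in B(F); no polynomial algorithm is implied.

```python
# Toy implementation of Definitions 3.2-3.3. A hyperarc is (S, T) with
# S, T frozensets of node names; configurations are sets of nodes.
from itertools import combinations

def closure(hyperarcs, A):
    """Fixed-point iteration of Definition 3.2."""
    C, changed = set(A), True
    while changed:
        changed = False
        for S, T in hyperarcs:
            if S <= C and not T <= C:
                C |= T
                changed = True
    return C

def is_safe(hyperarcs, A, forbidden):
    """A is F-safe iff cl_H(A) avoids the forbidden set."""
    return not (closure(hyperarcs, A) & forbidden)

def minimal_unsafe_antichain(V, hyperarcs, forbidden):
    """All unsafe A every one of whose one-element-removed subsets is safe."""
    out = []
    for r in range(1, len(V) + 1):
        for A in combinations(sorted(V), r):
            A = frozenset(A)
            if not is_safe(hyperarcs, A, forbidden) and all(
                is_safe(hyperarcs, A - {a}, forbidden) for a in A
            ):
                out.append(A)
    return out

# AND-hyperarc {read, net} => {exfil}, with exfil forbidden.
arcs = [(frozenset({"read", "net"}), frozenset({"exfil"}))]
print(minimal_unsafe_antichain({"read", "net", "exfil"}, arcs, {"exfil"}))
```

On this instance B(F) contains {exfil} itself and the AND-pair {read, net}, whose joint presence fires the conjunctive hyperarc; no singleton subset of the pair is unsafe.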
Definition 3.5 (Safe Audit Surface). The safe goal discovery map of Spera [2026, Theorem 10.1] is:

    G_F(A) = ( Emg(A) \ F,  NMF_F(A),  top-k_{v ∈ V \ (cl(A) ∪ F)} γ_F(v, A) ),

where the marginal gain is γ_F(v, A) = |cl(A ∪ {v}) \ (cl(A) ∪ F)|. It is polynomial-time computable in O(|V| · (n + mk)) and provides derivation certificates for every element of Emg(A) \ F.

3.2 Propositional Datalog

Definition 3.6 (Propositional Datalog Program). A propositional Datalog program is a pair Π = (R, D_0) where R is a finite set of propositional Horn rules p_1 ∧ ··· ∧ p_k ⇒ q (k ≥ 0), and D_0 is a finite set of ground facts (the extensional database, EDB). The least model of Π is Π(D_0) = T_Π ↑ ω, computed by iterating the immediate consequence operator to its least fixed point.

Definition 3.7 (Datalog Query and Minimal Witness). A propositional Datalog query is a pair (Π, q) where q is a distinguished atom. The query evaluates to true over database D iff q ∈ Π(D). A minimal witness for (Π, q) over a database family {D_A : A ⊆ V} is a set W ⊆ V such that q ∈ Π(D_W) and q ∉ Π(D_{W \ {w}}) for every w ∈ W.

3.3 Provenance Semirings

Green et al. [2007] showed that Datalog derivations can be annotated with elements of a commutative semiring (K, +, ×, 0, 1), yielding a provenance polynomial recording which combinations of base facts contributed to each derived fact. The why-provenance semiring (2^{2^V}, ∪, ⋈, ∅, {∅}) records for each derived atom the set of minimal EDB subsets sufficient to derive it, where ⋈ denotes pairwise set union.

4 The Encoding Theorem

Intuition. Capability hypergraphs encode conjunction through hyperedges: a hyperedge (S, {v}) fires precisely when all preconditions in S are simultaneously present. Datalog rules encode the same structure through Horn clauses: ⋀_{s ∈ S} has(s) ⇒ has(v).
The key difficulty is not representational but semantic: ensuring that safety closure, minimal witnesses, and incremental update behaviour are preserved exactly under the mapping. The theorems in this section show that the correspondence is not merely syntactic but complete in this stronger sense: closure maps to least-model computation, B(F) maps to the minimal witness antichain, and safe audit surfaces map to stratified views.

4.1 From Capability Hypergraphs to Datalog

Definition 4.1 (The CapHyp → Datalog_prop Encoding). Given H = (V, F) and forbidden set F ⊆ V, define the Datalog program Π_H = (R_H, ∅) as follows. For each hyperedge e = (S, {v}) ∈ F (restricting to singleton heads without loss of generality via head splitting):

    has(s_1) ∧ ··· ∧ has(s_k) ⇒ has(v),    S = {s_1, ..., s_k}.

For each f ∈ F:

    has(f) ⇒ forbidden.

Given initial configuration A ⊆ V, the EDB is D_A = {has(a) : a ∈ A}.

Remark 4.2 (Stratified Negation). The safety predicate ¬forbidden ⇒ safe lies outside standard Datalog but is expressible in stratified Datalog (Datalog¬s), whose least stratified model is unique and polynomial-time computable [Abiteboul et al., 1995]. The main equivalence works with the pure capability closure program (without the safety predicate); the safety predicate is a stratified extension computed in the stratum above forbidden.

Theorem 4.3 (Encoding Correctness: CapHyp → Datalog_prop). Let H = (V, F) be a capability hypergraph and A ⊆ V. Under Definition 4.1:

1. cl_H(A) = {v ∈ V : has(v) ∈ Π_H(D_A)}.

2. A ∈ R(F) iff forbidden ∉ Π_H(D_A).

3. B ∈ B(F) iff B is a minimal witness for the query (Π_H, forbidden) over the database family {D_A : A ⊆ V}.

Proof. Part (1). By Definition 4.1, the capability rules of Π_H are exactly {has(s_1) ∧ ··· ∧ has(s_k) ⇒ has(v) : (S, {v}) ∈ F, S = {s_1, ..., s_k}}. The immediate consequence operator of Π_H over D_A is

    T_{Π_H}(I) = D_A ∪ { has(v) : ∃ (S, {v}) ∈ F, has(s) ∈ I for all s ∈ S }.

This is precisely the closure iteration C_0 = A, C_{i+1} = C_i ∪ {v : ∃ (S, {v}) ∈ F, S ⊆ C_i} of Spera [2026]. By the van Emden–Kowalski theorem [van Emden and Kowalski, 1976], T_{Π_H} ↑ ω = Π_H(D_A) is the least Herbrand model of the Horn clause program, giving cl_H(A) = {v : has(v) ∈ Π_H(D_A)}.

Part (2). By Part (1) and the safety rules: A ∈ R(F) iff cl_H(A) ∩ F = ∅ iff no has(f) with f ∈ F is derived iff the rule has(f) ⇒ forbidden never fires for any f ∈ F iff forbidden ∉ Π_H(D_A).

Part (3). We prove each direction separately.

Forward direction (B ∈ B(F) implies B is a minimal witness). Suppose B ∈ B(F). Then B ∉ R(F), so by Part (2), forbidden ∈ Π_H(D_B). For any b ∈ B, since B \ {b} ∈ R(F), Part (2) gives forbidden ∉ Π_H(D_{B \ {b}}). Therefore B is a minimal EDB subset deriving forbidden, i.e., B is a minimal witness.

Reverse direction (every minimal witness is in B(F)). Let W ⊆ V be a minimal witness for (Π_H, forbidden), i.e., forbidden ∈ Π_H(D_W) and forbidden ∉ Π_H(D_{W \ {w}}) for every w ∈ W. By Part (2), W ∉ R(F) and W \ {w} ∈ R(F) for every w ∈ W. Therefore W ∈ B(F) by definition.

The bijection is exact. The two directions above give B(F) ⊆ {minimal witnesses} and {minimal witnesses} ⊆ B(F), so the sets are equal.

We complete the proof by verifying the why-provenance characterisation. In the why-provenance semiring, the provenance of forbidden in Π_H(D_A) is the set of all minimal W ⊆ EDB such that forbidden is derivable from W alone. We show by induction on derivation depth that the minimal such W are exactly the elements of B(F).

Base case. If forbidden is derivable from D_A in one step, then some f ∈ F satisfies has(f) ∈ D_A, i.e., f ∈ A.
The minimal witness is {f} ⊆ A, and indeed {f} ∈ B(F) (since cl({f}) ∋ f ∈ F while cl(∅) ∩ F = ∅).

Inductive step. Suppose forbidden is derived at depth d > 1. Then there exists f ∈ F such that has(f) is derived at depth d − 1. By the induction hypothesis applied to the sub-derivation of has(f), the minimal EDB sets from which f is derivable are exactly the minimal capability sets M ⊆ V with f ∈ cl(M). A minimal such M with f ∈ F is by definition an element of B(F). Conversely, every B ∈ B(F) derives some f ∈ F in this way. Therefore the why-provenance of forbidden equals B(F), completing the proof.

4.2 From Datalog to Capability Hypergraphs

Definition 4.4 (The Datalog_prop → CapHyp Encoding). Given a propositional Datalog program Π = (R, ∅), define the capability hypergraph H_Π = (V_Π, F_Π) as follows. V_Π is the set of all propositional atoms appearing in Π. For each rule p_1 ∧ ··· ∧ p_k ⇒ q ∈ R, add the hyperedge ({p_1, ..., p_k}, {q}) to F_Π. Given database D, the initial configuration is A_D = {p : p ∈ D}.

Theorem 4.5 (Encoding Correctness: Datalog_prop → CapHyp). Let Π = (R, ∅) be a propositional Datalog program and D a database. Under Definition 4.4:

1. Π(D) = cl_{H_Π}(A_D).

2. The encodings CapHyp → Datalog_prop and Datalog_prop → CapHyp are mutually inverse up to the following explicit isomorphisms.

• For the round-trip H ↦ Π_H ↦ H_{Π_H}: the isomorphism φ_H : H_{Π_H} → H is the identity on V, with the has(·) wrapper stripped from atom names. Formally, φ_H(v) = v for all v ∈ V, and for each hyperarc ({has(s_1), ..., has(s_k)}, {has(v)}) ∈ F_{H_{Π_H}}, φ_H maps it to (S, {v}) ∈ F with S = {s_1, ..., s_k}.

• For the round-trip Π ↦ H_Π ↦ Π_{H_Π}: the isomorphism ψ_Π : Π_{H_Π} → Π is the identity on atoms, with the has(·) wrapper added.
Both φ_H and ψ_Π are structure-preserving bijections that commute with the closure operators. In both cases closure, safety, and B(F) are preserved exactly.

Proof. Part (1). The immediate consequence operator of Π over D is T_Π(I) = D ∪ {q : (p_1 ∧ ··· ∧ p_k ⇒ q) ∈ R, p_i ∈ I for all i}. Under Definition 4.4, the closure iteration of H_Π from A_D is C_0 = A_D = D, C_{i+1} = C_i ∪ {q : ∃ ({p_1, ..., p_k}, {q}) ∈ F_Π, p_j ∈ C_i for all j}. These are the same iteration. By the van Emden–Kowalski theorem, T_Π ↑ ω = Π(D), so Π(D) = cl_{H_Π}(A_D).

Part (2). We verify both round-trips.

Round-trip H ↦ Π_H ↦ H_{Π_H}. Let H = (V, F). By Definition 4.1, Π_H has one capability rule has(s_1) ∧ ··· ∧ has(s_k) ⇒ has(v) for each (S, {v}) ∈ F with S = {s_1, ..., s_k}. Applying Definition 4.4 to Π_H (stripping the has(·) wrapper, which is a pure renaming): V_{H_{Π_H}} = V and F_{H_{Π_H}} = F. So H_{Π_H} ≅ H. By Theorem 4.3(1) and Theorem 4.5(1): for every A ⊆ V,

    cl_{H_{Π_H}}(A_{D_A}) = Π_H(D_A) = cl_H(A),

confirming that the round-trip preserves closure.

Round-trip Π ↦ H_Π ↦ Π_{H_Π}. Let Π = (R, ∅). By Definitions 4.4 and 4.1, Π_{H_Π} has one rule per hyperedge of H_Π, which corresponds bijectively to each rule of R (again up to the has(·) renaming). So Π_{H_Π} ≅ Π as programs. By Theorem 4.5(1) and Theorem 4.3(1): for every database D,

    Π_{H_Π}(D) = cl_{H_Π}(A_D) = Π(D),

confirming that the round-trip preserves the least model for every database.

In both cases the encodings are inverse bijections on program/hypergraph structure and preserve all relevant semantics (closure, safety, and, by Part (3) of Theorem 4.3, the minimal unsafe antichain).

5 The Tight Expressivity Theorem

Theorem 5.1 (Tight Expressivity). The capability hypergraph framework captures exactly the class of Boolean queries over propositional databases expressible in Datalog_prop. Formally:

1.
(Completeness) Every Datalog_prop query can be expressed as a capability hypergraph safety query.

2. (Soundness) Every capability hypergraph safety query can be expressed as a Datalog_prop query.

3. (Tightness) There exist Boolean queries over propositional databases that are not expressible as capability hypergraph safety queries (equivalently, not expressible in Datalog_prop).

Proof. Parts (1) and (2) follow immediately from Theorems 4.3 and 4.5: the translations in both directions are polynomial-time and preserve semantics, so the two classes of queries coincide.

Part (3): the tightness separation. We exhibit a Boolean query over propositional databases not expressible in Datalog_prop, hence not as a capability hypergraph safety query.

The parity query. For a propositional database D and a fixed finite universe U of atoms, define Parity(D) = 1[|D ∩ U| is even]. We show this is not expressible in Datalog_prop by a two-step argument.

Step 1: capability safety queries are monotone. Every capability hypergraph safety query q_H(A) := 1[A ∉ R(F)] is monotone: if A ∉ R(F) then A ∪ {a} ∉ R(F) for any a ∈ V. This holds because R(F) is a lower set by Theorem 9.4 of Spera [2026]: if A is unsafe then every superset of A is unsafe. Equivalently, the dual query p_H(A) := 1[A ∈ R(F)] is anti-monotone: safety can only be lost, never gained, as capabilities are added. More generally, any Datalog_prop query is monotone in the EDB: if q ∈ Π(D) and D ⊆ D′ then q ∈ Π(D′) (since T_Π is a monotone operator).

Step 2: the parity query is non-monotone. Consider U = {p_1, p_2} and D_1 = {p_1}, D_2 = {p_1, p_2}. Then |D_1 ∩ U| = 1 (odd, so Parity = 0) and |D_2 ∩ U| = 2 (even, so Parity = 1). Since D_1 ⊊ D_2 but Parity(D_1) = 0 < 1 = Parity(D_2), the parity query is non-monotone.
By Step 1, it cannot be expressed in Datalog_prop, hence not as a capability hypergraph safety query.

Broader class of non-expressible queries. The same argument applies to any non-monotone Boolean query (e.g., counting queries, threshold queries with a non-trivial lower bound, and any query that can be falsified by adding atoms to the database). This is a strict separation: the capability hypergraph framework exactly captures the monotone (more precisely, co-monotone when framed as safety) Boolean queries expressible in polynomial time over propositional databases.

Corollary 5.2 (Complexity Inheritance). The capability hypergraph safety problem inherits the complexity characterisation of Datalog_prop query evaluation:

1. (Data complexity) Fixed-program safety checking is in P.

2. (Combined complexity) Safety checking with both the hypergraph and configuration as input is P-complete, matching Theorem 8.3 of Spera [2026].

3. (Minimal unsafe antichain) Deciding B ∈ B(F) is coNP-complete in the program, matching Theorem 9.5 of Spera [2026].

Proof. Part (1) follows from Immerman's theorem that Datalog_prop query evaluation is in P in the data [Immerman, 1986]. Part (2) follows from P-completeness of propositional Horn-clause satisfiability (the circuit value problem), which equals the combined complexity of Datalog_prop query evaluation [Abiteboul et al., 1995]; this re-derives Theorem 8.3 of Spera [2026] from the Datalog complexity landscape. Part (3) follows from coNP-completeness of minimal witness membership for monotone Boolean queries [Eiter and Gottlob, 1995], which by Theorem 4.3(3) is exactly the problem of deciding B ∈ B(F); this re-derives Theorem 9.5 of Spera [2026] from database theory. In each case the complexity result is strengthened: it now follows from foundational Datalog theory rather than ad hoc reductions.
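The two-step separation argument can be checked mechanically. The sketch below (our illustrative evaluator, not the paper's code) confirms that enlarging the EDB never retracts a derived atom, while the parity query flips on the very pair D_1 ⊊ D_2 used in Step 2:

```python
# Sketch of Theorem 5.1's separation: Datalog_prop evaluation is monotone
# in the EDB, but the parity query is not, so parity admits no
# Datalog_prop program. Evaluator and names are ours, for illustration.

def least_model(rules, edb):
    model, changed = set(edb), True
    while changed:
        changed = False
        for body, head in rules:
            if head not in model and body <= model:
                model.add(head)
                changed = True
    return model

def parity(D, U=frozenset({"p1", "p2"})):
    """Parity(D) = 1 iff |D ∩ U| is even."""
    return len(frozenset(D) & U) % 2 == 0

rules = [(frozenset({"p1", "p2"}), "q")]
D1, D2 = {"p1"}, {"p1", "p2"}

# Monotone: D1 ⊆ D2 implies Pi(D1) ⊆ Pi(D2).
assert least_model(rules, D1) <= least_model(rules, D2)

# Non-monotone: Parity flips from odd (0) to even (1) as an atom is added.
print(parity(D1), parity(D2))   # -> False True
```

Any non-monotone query fails the same way, which is exactly the broader class identified above.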
6 The Provenance Theorem

Definition 6.1 (Safety Provenance Semiring). For capability hypergraph H = (V, F) and forbidden set F ⊆ V, the safety provenance of configuration A is the element of the why-provenance semiring (2^(2^V), ∪, ▷◁, ∅, {∅}) assigned to the atom forbidden in Π_H(D_A) under the encoding of Definition 4.1, where X ▷◁ Y = {x ∪ y : x ∈ X, y ∈ Y} denotes the pairwise union (join) of sets of sets.

Theorem 6.2 (Provenance Theorem). Let H = (V, F), F ⊆ V. Under the CapHyp → Datalog_prop encoding: 1. The derivation certificates of Theorem 10.1 of Spera [2026] are exactly the elements of the why-provenance of forbidden in Π_H(D_A). 2. The minimal unsafe antichain B(F) equals the set of minimal elements of the why-provenance of forbidden over all databases D_A. 3. The certificate verification procedure (re-executing the firing sequence) is exactly the provenance witness checking procedure of Green et al. [2007].

Proof. We prove all three parts by structural induction on Datalog derivations.

Setup. Fix H = (V, F), F ⊆ V, and A ⊆ V. Recall that in the why-provenance semiring, the provenance Why(q, Π, D) of a derived atom q is the set of all minimal subsets W ⊆ D such that q ∈ Π(W) [Green et al., 2007]. We write Why(q) when Π = Π_H and D = D_A are clear from context.

Lemma (Why-provenance of capability atoms). For any v ∈ V, the why-provenance of has(v) in Π_H over D_A is: Why(has(v)) = {W ⊆ A : W is a minimal subset with v ∈ cl_H(W)}.

Proof of Lemma. By induction on the depth d of the shortest derivation of has(v) in Π_H.

Base case, d = 0. has(v) ∈ D_A, so v ∈ A and the only derivation uses the EDB fact directly. The unique minimal witness is {v}: indeed v ∈ cl_H({v}) and no strict subset of {v} derives v, so Why(has(v)) = {{v}} = {W ⊆ A : W inclusion-minimal with v ∈ cl_H(W)}.
Inductive step, d > 0. has(v) is derived via a rule has(s1) ∧ ··· ∧ has(sk) ⇒ has(v) corresponding to hyperedge (S, {v}) ∈ F, S = {s1, ..., sk}. Each has(si) is derived at depth < d. By the induction hypothesis, Why(has(si)) = {W ⊆ A : W minimal with si ∈ cl_H(W)} for each i. The why-provenance semiring computes: Why(has(v)) ⊇ ({W1 ∪ ··· ∪ Wk : Wi ∈ Why(has(si))})_min, where (·)_min denotes the antichain of inclusion-minimal sets. A set W = W1 ∪ ··· ∪ Wk with each Wi minimal for si ∈ cl_H(Wi) satisfies S ⊆ cl_H(W) (since si ∈ cl_H(Wi) ⊆ cl_H(W) by monotonicity), so the hyperedge fires and v ∈ cl_H(W). Minimality of W as a witness for v: if we remove any w ∈ W, then some si ∉ cl_H(W \ {w}) (by minimality of the Wi), so the hyperedge cannot fire and v ∉ cl_H(W \ {w}) (or v is derived by a strictly longer derivation, but we are taking the antichain). Therefore Why(has(v)) is exactly the collection of minimal capability sets from which v is reachable via closure, which is what the lemma claims.

Part (1): certificates are why-provenance witnesses. Theorem 10.1 of Spera [2026] provides, for each v ∈ Emg(A) \ F, a derivation certificate: the firing sequence (e1, ..., en) that produces v from A. Under the encoding, this firing sequence is a Datalog derivation tree for has(v) over D_A. The certificate records exactly the minimal EDB subset used in the derivation, which by the Lemma is an element of Why(has(v)). Conversely, every element W ∈ Why(has(v)) is a minimal W ⊆ A with v ∈ cl_H(W); the corresponding derivation tree constitutes a valid firing-sequence certificate. The bijection is exact.

Part (2): B(F) = minimal elements of Why(forbidden). Applying the Lemma to forbidden: the why-provenance of forbidden in Π_H is the set of minimal W ⊆ V such that forbidden ∈ Π_H(D_W).
By Theorem 4.3(3), these are exactly the elements of B(F). Since the why-provenance records only the antichain of minimal witnesses, and B(F) is itself an antichain (no element contains another, by definition), the set of minimal elements of the why-provenance is exactly B(F).

Part (3): certificate verification = provenance witness checking. The certificate verification procedure of Spera [2026] re-executes the firing sequence and checks that each step is valid. In the Datalog setting, verifying a provenance witness W for query q means checking that q ∈ Π(D_W), i.e., re-evaluating the Datalog program on the witness, which is exactly re-executing the firing sequence. The two procedures are identical under the encoding.

Corollary 6.3 (Certificate Algebra). The derivation certificates of Spera [2026] form a commutative semiring under ⊕ (certificate disjunction: either derivation path suffices) and ⊗ (certificate conjunction: all derivation paths required). This gives certificates a canonical algebraic structure enabling certificate compression, composition, and validation in a uniform framework.

Proof. This follows from the general theory of provenance semirings [Green et al., 2007]: any collection of derivation witnesses inherits the semiring structure. The ⊕ and ⊗ operations correspond to the ∪ and ▷◁ of the why-provenance semiring.

7 The Witness Correspondence

Theorem 7.1 (Witness Correspondence). Let q = (Π_H, forbidden) be the Datalog safety query. Then: 1. B(F) equals the set of minimal witnesses for q over {D_A : A ⊆ V}. 2. Enumerating B(F) is equivalent to enumerating minimal witnesses for q, which is output-polynomial in |B(F)| [Eiter and Gottlob, 1995]. 3.
The online coalition safety check of Theorem 11.2 of Spera [2026] is equivalent to the Datalog query "does the EDB contain a superset of some minimal witness?", decidable in O(|B(F)| · |A|).

Proof. Part (1) is exactly Theorem 4.3(3).

Part (2). By Part (1), enumerating B(F) is the minimal-witness enumeration problem for the monotone Boolean query (Π_H, forbidden). Eiter and Gottlob [1995] showed that minimal-witness (equivalently, minimal-transversal) enumeration for monotone Boolean queries is computable in output-polynomial time: there is an algorithm that enumerates all minimal witnesses in time polynomial in |V| + |B(F)|. (Specifically, for monotone Datalog queries, the minimal witnesses are the minimal transversals of the set system induced by the query, and Eiter–Gottlob's Algorithm A runs in O(|V| · |B(F)|) per witness.)

Part (3). The coalition safety check asks: is ∪_i A_i ∈ R(F)? By Theorem 11.2 of Spera [2026], this is equivalent to: does there exist B ∈ B(F) with B ⊆ ∪_i A_i? Under the encoding, B(F) is the set of minimal witnesses for (Π_H, forbidden), so the check is: does the EDB D_{∪_i A_i} contain (as a subset) some minimal witness? This is the Datalog query "∃B ∈ B(F) : B ⊆ A", which requires checking each B ∈ B(F) against A in O(|B|) time, giving total time O(Σ_{B ∈ B(F)} |B|) ≤ O(|B(F)| · |A|).

8 Open Problem Correspondence

One of the most valuable consequences of the equivalence is that the open problems of Spera [2026] each map to known open problems in Datalog theory, establishing a research agenda in which progress transfers immediately.
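The witness correspondence can be made concrete with a brute-force sketch (exponential in |A|, for illustration only; the hyperedge encoding and capability names below are hypothetical, not from the paper): B(F) is computed as the inclusion-minimal witnesses of the forbidden atom, and the coalition check reduces to |B(F)| subset tests.

```python
from itertools import combinations

def closure(hyperedges, W):
    """cl_H(W): forward closure under AND-hyperedges given as (tail, head)."""
    cl, changed = set(W), True
    while changed:
        changed = False
        for tail, head in hyperedges:
            if tail <= cl and head not in cl:
                cl.add(head)
                changed = True
    return cl

def minimal_witnesses(hyperedges, universe, target):
    """B(F) as minimal witnesses: inclusion-minimal W with target in cl_H(W)."""
    found = []
    for r in range(len(universe) + 1):
        for combo in combinations(sorted(universe), r):
            W = set(combo)
            if target in closure(hyperedges, W) and not any(m <= W for m in found):
                found.append(W)
    return found

def coalition_unsafe(minimal_unsafe, coalitions):
    """Part (3): does the pooled EDB contain some minimal witness?
    O(|B(F)| * |A|) subset tests."""
    pooled = set().union(*coalitions)
    return any(B <= pooled for B in minimal_unsafe)

# Hypothetical hypergraph: read_db AND net_out => exfil, with exfil forbidden.
H = [(frozenset({"read_db", "net_out"}), "exfil")]
B_F = minimal_witnesses(H, {"read_db", "net_out", "fs_read"}, "exfil")
assert B_F == [{"read_db", "net_out"}]
assert not coalition_unsafe(B_F, [{"read_db"}, {"fs_read"}])
assert coalition_unsafe(B_F, [{"read_db"}, {"net_out"}])
```

The output-polynomial algorithm of Eiter and Gottlob avoids the subset enumeration; the sketch only fixes the semantics being enumerated.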
Open problem in Spera [2026] → corresponding open problem in Datalog:

• Tighter PAC bounds for structured (sparse, low fan-in) hypergraph families → tighter VC-dimension bounds for structured Datalog_prop hypothesis classes; Rademacher complexity for Horn-clause learning [Dalmau et al., 2002, Cohen, 1995].
• Probabilistic closure under correlated hyperarc firing → probabilistic Datalog with dependent tuples; expectation-semiring extensions [Green et al., 2007].
• NP-hardness of MinUnsafeAdd for budget b ≥ 2 → view update problem for Datalog: inserting rules to make a query true with minimal additions.
• Approximate safe audit surfaces for large-scale deployments → approximate query answering in Datalog; PAC-model counting for Horn queries.
• Incremental maintenance of B(F) under dynamic hyperedge changes → incremental view maintenance; the DRed algorithm and extensions [Abiteboul et al., 1995].

PAC learning tightness. The VC-dimension bound in Theorem 14.2 of Spera [2026] (O(n²k)) follows from Sauer's lemma applied to the hypothesis class of Datalog_prop programs with at most m rules and maximum body size k. The open question is whether Rademacher complexity yields tighter bounds for structured hypothesis classes (e.g., programs with a fixed predicate dependency graph). Dalmau et al. [2002] studied similar structured Datalog learning problems in the context of constraint satisfaction; their techniques transfer directly to the capability hypergraph setting.

Probabilistic Datalog safety. The path-independence assumption of Theorem 14.5 of Spera [2026] (hyperarc firings are independent given their tails) corresponds to tuple independence in probabilistic databases. The exact computation of r(v) = Pr[v ∈ cl_p(A)] under general tuple dependence is an open problem in probabilistic Datalog [Green et al., 2007], corresponding to the #P-hard case of inference in probabilistic Horn programs.
The partial results of Spera [2026] (polynomial time under independence) correspond to the tractable case of tuple-independent probabilistic Datalog.

View update for adversarial robustness. MinUnsafeAdd (Theorem 14.7 of Spera [2026]) is NP-hard for b ≥ 2. Under the encoding, this is the view update problem for Datalog: given a Datalog program Π_H and a target database D_{A*} that should derive forbidden, what is the minimum number of rules to add to make D_{A*} derive forbidden? The NP-hardness of the general case and the polynomial tractability of the b = 1 single-rule case match known results in the Datalog view-update literature.

9 Non-Compositionality as a Datalog Theorem

Theorem 9.1 (Non-Compositionality as Datalog Non-Modularity). Let Π be a propositional Datalog program and q a query predicate. Define S(q) = {D : q ∉ Π(D)}. Then S(q) is in general not closed under union: there exist a program Π, a query q, and D1, D2 ∈ S(q) with D1 ∪ D2 ∉ S(q).

Proof. Let Π contain the single rule p1 ∧ p2 ⇒ q. Set D1 = {p1} and D2 = {p2}. Then q ∉ Π(D1) (the rule requires p2, absent from D1) and q ∉ Π(D2) (the rule requires p1, absent from D2). But q ∈ Π(D1 ∪ D2), since {p1, p2} ⊆ D1 ∪ D2 and the rule fires. Therefore D1, D2 ∈ S(q) but D1 ∪ D2 ∉ S(q).

The non-compositionality of safety (Theorem 9.2 of Spera [2026]) is this theorem under the encoding, with p1 = has(u1), p2 = has(u2), and q = has(f) for f ∈ F.

Remark 9.2. This perspective provides a structural explanation of non-compositionality: it is a consequence of AND-rule semantics in propositional Datalog derivation systems. The capability hypergraph result identifies the minimal instance (three nodes, one rule), but the phenomenon is a property of Datalog_prop derivations in general. Any system with conjunctive rules will exhibit non-compositional safety.
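The three-node, one-rule instance of Theorem 9.1 can be executed directly. A minimal sketch, assuming a simple (body, head) rule representation of our own:

```python
def evaluate(rules, edb):
    """Least model of a propositional Datalog program over EDB `edb`."""
    model, changed = set(edb), True
    while changed:
        changed = False
        for body, head in rules:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

# The single AND-rule p1 AND p2 => q of Theorem 9.1.
rules = [(frozenset({"p1", "p2"}), "q")]
safe = lambda D: "q" not in evaluate(rules, D)

assert safe({"p1"}) and safe({"p2"})   # D1, D2 individually in S(q)
assert not safe({"p1", "p2"})          # but D1 ∪ D2 derives q: not in S(q)
```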
10 Empirical Grounding

The results in this paper are purely theoretical; all guarantees hold for arbitrary capability hypergraphs. The following is illustrative only, re-expressing the empirical study of Spera [2026] in Datalog vocabulary to show that the encoding is concrete. Detailed derivation trees and the aggregate statistics table appear in Appendix A; we summarise the key correspondences here.

In the 12-capability Telco deployment, each agent session corresponds to an EDB and a Datalog evaluation: 42.6% of sessions produce a least model containing at least one emergent atom (Emg(A) ≠ ∅), meaning conjunctive rules are necessary for correct derivation in nearly half of real trajectories. Each observed AND-violation is an instance of Theorem 9.1: two individually safe EDB subsets whose union derives forbidden (38.2% of conjunctive sessions under the workflow planner). The 0% AND-violation rate of the hypergraph planner in this corpus is consistent with the soundness guarantee of Theorem 4.3: correct Datalog_prop evaluation never produces non-modularity violations by construction; the empirical observation corroborates but does not prove the theorem for arbitrary deployments. The 900 sessions thus provide independent empirical instantiations of the correctness guarantee, illustrating the theoretical transfer in observed agent behaviour.

11 Capability Safety Admits Efficient Incremental Maintenance

The safe audit surface G_F(A) of Spera [2026] is the central computational object of the capability hypergraph framework: it certifies every safely acquirable capability, every capability one step from current reach, and every structurally forbidden path. In Spera [2026], computing G_F(A) from scratch costs O(|V| · (n + mk)), and every change to the deployed hypergraph (a tool added, a capability revoked, a new agent joining the coalition) requires full recomputation.
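As a concrete baseline, a minimal sketch (our own hyperedge representation, not the paper's) of why from-scratch computation is expensive: the top-k marginal-gain component alone forces one closure fixpoint per candidate atom, which is the O(|V| · (n + mk)) recomputation cost that incremental maintenance avoids.

```python
def closure(hyperedges, A):
    """cl_H(A): naive fixpoint. One pass over the rules costs O(n + m*k),
    and the loop runs at most |V| times, matching the per-closure bound."""
    cl, changed = set(A), True
    while changed:
        changed = False
        for tail, head in hyperedges:
            if tail <= cl and head not in cl:
                cl.add(head)
                changed = True
    return cl

def marginal_gains(hyperedges, A, V, F):
    """gamma_F(v, A) = |cl(A ∪ {v}) \ (cl(A) ∪ F)| for each candidate v:
    one closure evaluation per candidate, hence O(|V| * (n + m*k)) overall."""
    base = closure(hyperedges, A)
    return {v: len(closure(hyperedges, A | {v}) - (base | F))
            for v in V - base}

# Tiny hypothetical hypergraph: a => b, and b AND c => d.
H = [(frozenset({"a"}), "b"), (frozenset({"b", "c"}), "d")]
gains = marginal_gains(H, {"a"}, {"a", "b", "c", "d"}, set())
assert gains == {"c": 2, "d": 1}   # adding c newly derives both c and d
```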
The Datalog identification established in Section 4 enables an improved maintenance algorithm for deployments where hyperedge changes are localised. The key observation is that G_F(A) is not merely a function of the hypergraph: it is a Datalog view, a derived relation computed by a fixed stratified program Π_GF over the capability database D_A. Datalog views admit incremental maintenance under the DRed (Delete and Re-derive) algorithm [Abiteboul et al., 1995], whose per-update cost depends on the size of the change, not the size of the full program. When changes are global (affecting a large fraction of V), the advantage over naive recomputation diminishes; the benefit is greatest for the sparse, localised updates typical of enterprise deployments.

Operation | Naive recomp. | DRed (this paper)
Initial computation of G_F(A) | O(|V| · (n + mk)) | O(|V| · (n + mk))
Hyperedge insertion, active (S ⊆ cl(A)) | O(|V| · (n + mk)) | O(|Δ| · (n + mk))
Hyperedge insertion, lazy (S ⊄ cl(A)) | O(|V| · (n + mk)) | O(|S|)
Hyperedge deletion | O(|V| · (n + mk)) | O(|Δ| · (n + mk))
Containment G_F(A) ⊆ G_F(A′) | not known | decidable in P

Here |Δ| denotes the number of derived atoms directly affected by the change. For sparse hypergraphs with localised updates (the typical case in enterprise deployments, where a single new API or tool is added), |Δ| ≪ |V| and the maintenance cost is negligible relative to recomputation. The containment decidability result is entirely new: it has no counterpart in the hypergraph framework of Spera [2026].

Theorem 11.1 (Audit Surface as Datalog View, Primary Result). The safe goal discovery map G_F(A) of Spera [2026] is the result of evaluating a fixed propositional Datalog program Π_GF (with stratified negation) over D_A. Consequently: 1.
G_F(A) is maintainable under hyperedge insertions and deletions using the DRed algorithm in O(|Δ| · (n + mk)) per update. 2. G_F(A) is optimisable using magic-sets rewriting, reducing computation to capabilities reachable from the query predicate. 3. Query containment G_F(A) ⊆ G_F(A′) is decidable in polynomial time by Datalog_prop query containment for propositional programs.

Proof. We first construct Π_GF explicitly, then verify that it captures all three components of G_F(A) = (Emg(A) \ F, NMF_F(A), top-k).

Construction of Π_GF.

Base layer (stratum 0): capability closure. The capability closure rules are exactly Π_H (Definition 4.1). This stratum derives has(v) for all v ∈ cl(A).

Stratum 1: singleton closure and emergent capabilities. Add rules has(s) ⇒ has_single(v) for each singleton-tail hyperedge ({s}, {v}) ∈ F, iterated to full singleton closure. Then: has(v) ∧ ¬has_single(v) ∧ ¬in_A(v) ⇒ emergent(v), where in_A(v) is the EDB atom has(v) ∈ D_A. The condition v ∈ cl(A) \ A and v ∉ cl_1(A) captures Emg(A) exactly. Adding ¬forbidden(v) (from the safety layer) restricts to Emg(A) \ F.

Stratum 2: boundary detection and NMF. For each hyperedge e = (S, {v}) ∈ F with S = {s1, ..., sk}, we must identify whether exactly one element of S is missing from cl(A). We do this without arithmetic by writing one rule per candidate missing atom. For each i ∈ {1, ..., k}, introduce a predicate all_except(e, si) asserting that every element of S other than si is in cl(A): has(s1) ∧ ··· ∧ has(s_{i−1}) ∧ has(s_{i+1}) ∧ ··· ∧ has(sk) ⇒ all_except(e, si). This is one rule per (e, i) pair, giving at most Σ_{e ∈ F} |S(e)| = O(mk) rules in total across all hyperedges.
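The per-(e, i) rule generation above can be sketched mechanically. The predicate names has and all_except follow the construction; the tuple encoding of rules is our own assumption:

```python
def all_except_rules(hyperedges):
    """One positive rule per (e, i): body = {has(s) : s in S(e), s != s_i},
    head = all_except(e, s_i). Emits at most sum(|S(e)|) = O(m*k) rules."""
    rules = []
    for eid, (tail, head) in enumerate(hyperedges):
        for s in tail:
            body = frozenset(("has", t) for t in tail if t != s)
            rules.append((body, ("all_except", eid, s)))
    return rules

# A single hyperedge with tail {a, b, c} yields one rule per candidate
# missing atom: three rules in total.
H = [(frozenset({"a", "b", "c"}), "d")]
rules = all_except_rules(H)
assert len(rules) == 3
assert (frozenset({("has", "a"), ("has", "b")}), ("all_except", 0, "c")) in rules
```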
A hyperedge e is on the boundary ∂(A) with missing atom si iff all_except(e, si) holds and has(si) does not: all_except(e, si) ∧ ¬has(si) ⇒ boundary_miss(e, si). We must further verify that si is the unique missing element, i.e., that there is no other sj (j ≠ i) also absent. This is guaranteed by the structure of all_except(e, si): that predicate fires only when all of s1, ..., s_{i−1}, s_{i+1}, ..., sk are present, so si is the sole missing element. Therefore boundary_miss(e, si) is derivable iff e ∈ ∂(A) with µ(e) = {si}, exactly as required.

Remark 11.2 (Stratification depth). The predicate all_except(e, si) is computed from has(·) (stratum 0) using only positive rules: stratum depth 0. The predicate boundary_miss(e, si) uses one application of ¬has(si): stratum depth 1. Adding ¬forbidden (derived in stratum 2 via the safety rules) requires a further stratum for the NMF safety filter. The full program is therefore stratified with four strata (0: closure; 1: singleton closure + boundary detection; 2: emergent + forbidden; 3: NMF + top-k), satisfying the standard stratification condition [Abiteboul et al., 1995].

The NMF predicate is then: boundary_miss(e, s) ∧ ¬forbidden_if_added(s) ⇒ nmf(s), where forbidden_if_added(s) is computed in stratum 3 by evaluating a fresh copy of the closure rules on D_A ∪ {has(s)} and checking whether forbidden is derived. Since |V| is fixed, this is a finite program.

Stratum 3: marginal gain for top-k. The marginal gain γ_F(v, A) = |cl(A ∪ {v}) \ (cl(A) ∪ F)| requires evaluating a closure for each v ∈ V \ cl(A). This is expressible as a bounded Datalog program (one fresh copy of the closure rules per candidate v), but since |V| is fixed, the entire stratum is a finite fixed program.
The top-k computation selects the k atoms v with highest gain, expressible via iterated join and selection in stratified Datalog.

Verification that Π_GF(D_A) captures G_F(A). By construction: emergent(v) ∈ Π_GF(D_A) iff v ∈ Emg(A) \ F; nmf(s) ∈ Π_GF(D_A) iff s ∈ NMF_F(A); and the top-k selection identifies the top capabilities by marginal gain. All three components of G_F(A) are captured.

Part (1): DRed incremental maintenance. Π_GF is a stratified Datalog program (finitely many strata, each a standard Datalog program or one with simple stratified negation). The DRed (Delete and Re-derive) algorithm of Abiteboul et al. [1995] provides incremental maintenance for such programs. When a hyperedge e = (S, {v}) is added or deleted, the affected derivations are exactly those involving e. The number of re-derivations is O(|Δ| · (n + mk)) per update, where |Δ| counts directly affected atoms.

Part (2): magic-sets optimisation. Magic-sets rewriting [Abiteboul et al., 1995] transforms a Datalog program into an equivalent one that propagates query goals top-down, computing only the atoms relevant to answering the query. Applied to Π_GF, this restricts the closure computation to capabilities reachable from the NMF and emergent-capability queries, potentially reducing computation from O(|V| · (n + mk)) to the subset of the hypergraph relevant to the current configuration.

Part (3): query containment. Propositional Datalog query containment (deciding whether q ∈ Π(D) implies q ∈ Π′(D) for all databases D) is decidable in polynomial time for propositional (ground, monadic) programs, since it reduces to checking a finite number of critical instances. G_F(A) ⊆ G_F(A′) is equivalently: for all v, emergent(v) ∈ Π_GF(D_A) implies emergent(v) ∈ Π_GF(D_A′), and similarly for NMF and top-k.
This is a finite collection of containment checks on Π_GF, decidable in polynomial time.

Theorem 11.3 (Locality Gap for Capability Safety Maintenance). Let H = (V, F) be a capability hypergraph with n = |V|, m = |F|, maximum tail size k, and safe audit surface G_F(A) as in Definition 3.4. We prove the following lower bound for capability hypergraphs specifically. 1. (Local incremental upper bound.) If the update affects dependency cone Δ ⊆ V, then G_F(A) can be updated in O(|Δ| · (n + mk)) time via the Datalog view-maintenance formulation of Theorem 11.1. 2. (Strict separation.) There exist an infinite family {H′_n}_{n≥1} and single-hyperedge updates {u′_n} such that |Δ_n| = O(1) but |V_n| = n, so incremental maintenance costs O(n) while naive recomputation costs Ω(n²). The gap is unbounded. 3. (AND-inspection lower bound.) Let u = (S_u, {v_u}) be the inserted hyperedge. Any deterministic algorithm that correctly computes the updated G_F(A) must inspect all inputs to the updated rule in the worst case: it must probe every atom in the update witness set Φ(u) = S_u ∪ {v_u}. Therefore: (atom inspections to verify rule activation) ≥ |S_u| + 1 = k + 1. DRed matches this lower bound for rule-activation verification under this model: it probes exactly Φ(u) at the firing frontier before propagating downstream.

Proof.

Remark 11.4 (Recomputation cost). Naive recomputation of G_F(A) after any single-hyperedge update costs O(|V| · (n + mk)): the top-k marginal-gain component requires evaluating a closure for each of the |V| candidate atoms.
On the chain family F_n = {({v_i}, {v_{i+1}})}, an insertion changes the gain of every downstream node, so any algorithm outputting the full updated ranking reads Ω(n) entries, each costing Ω(n + mk); this gives an output-sensitive lower bound of Ω(n · (n + mk)) for naive recomputation on this family.

Part (1): Local incremental upper bound. This follows directly from Theorem 11.1, Part (1). The DRed algorithm maintains Π_GF under single-rule insertions and deletions (corresponding to hyperedge updates) by re-deriving only atoms whose derivations pass through the changed rule. Formally, the dependency cone of an update u is: Δ(u) = {a ∈ V : some minimal derivation of has(a) in Π_GF uses rule r_u}, where r_u is the Datalog rule corresponding to the updated hyperedge. DRed re-derives exactly Δ(u), each atom at cost O(n + mk), giving total update cost O(|Δ(u)| · (n + mk)). Since |Δ(u)| ≤ |V|, this is always at most the cost of full recomputation, and strictly less whenever |Δ(u)| ≪ |V|.

Part (2): Strict separation (proof). We construct the family {H_n} explicitly.

Construction. Let V_n = {v_1, ..., v_n, w} for n ≥ 2. Define a single hyperedge e_n = ({v_1, ..., v_{n−1}}, {v_n}), with tail S_n = {v_1, ..., v_{n−1}}. Set the initial configuration A_n = {v_1, ..., v_{n−1}}, forbidden set F_n = {w}, and k = n − 1 (the tail size of e_n).

Baseline audit surface. cl_{H_n}(A_n) = {v_1, ..., v_n} (all of V_n \ {w}, since e_n fires when S_n ⊆ A_n). The marginal gains are: γ_{F_n}(w, A_n) = 0 (adding w violates F_n) and γ_{F_n}(v_i, A_n) = 0 for all i ≤ n − 1 (already in A_n). So the top-k list is the singleton {(v_n, 0)}; note v_n ∈ cl(A_n), so the NMF is empty and the emergent set is {v_n}.

The update. Insert the hyperedge u_n = ({v_n}, {w′}) for a fresh node w′ ∉ V_n, with w′ ∉ F_n.
After the update: V_n ← V_n ∪ {w′}, cl(A_n) = {v_1, ..., v_n, w′}, and w′ is now emergent (reachable via e_n then u_n, not via any singleton chain from A_n).

Dependency cone. The rule added to Π_GF is r_{u_n}: has(v_n) ⇒ has(w′). The only atom whose derivation uses r_{u_n} is has(w′). Therefore: Δ_n = {w′}, so |Δ_n| = 1 = O(1).

Incremental cost. DRed re-derives only has(w′), which requires one rule firing and a closure evaluation of cost O(n + mk) = O(n + n) = O(n) for this family (since m = 2, k = n − 1). Total update cost: O(1) · O(n) = O(n).

Naive recomputation cost. After the update, naive recomputation of G_F(A_n) must: (i) recompute cl(A_n) at cost O(n + mk) = O(n); (ii) recompute γ_F(v, A_n) for each v ∈ (V_n ∪ {w′}) \ cl(A_n); since cl(A_n) = (V_n ∪ {w′}) \ {w}, the only candidate is v = w, costing O(n); (iii) recompute the NMF by checking all m = 2 hyperedges at cost O(mk) = O(n); (iv) recompute the top-k marginal-gain table, which requires evaluating γ_F(v, A_n) for each candidate v ∈ {v_1, ..., v_n, w′} \ A_n = {v_n, w′}. On this family the candidate set is O(1), so the strict Ω(n²) separation for the top-k component requires a richer family. Replace H_n with H′_n: V′_n = {v_1, ..., v_n} ∪ {x_1, ..., x_n}, A′_n = {v_1, ..., v_{n−1}}, F′_n = ∅, and hyperedges e_n = ({v_1, ..., v_{n−1}}, {v_n}) plus f_i = ({v_n}, {x_i}) for i = 1, ..., n. Before the update: cl(A′_n) = {v_1, ..., v_n, x_1, ..., x_n}, and the marginal gains γ_{F′_n}(x_j, A′_n) for x_j ∉ A′_n each require a separate closure evaluation at cost O(|V′_n|) = O(n), with n such candidates, totalling O(n²).
Insert u′_n = ({v_{n−1}}, {v′_n}) for a fresh v′_n ∉ V′_n: now Δ′_n = {v′_n} (|Δ′_n| = 1, as before), while naive recomputation must re-evaluate all n marginal-gain closures, each O(n), giving Ω(n²).

The asymptotic gap. Incremental maintenance costs O(|Δ′_n| · |V′_n|) = O(n). Naive recomputation costs Ω(n²). The ratio is Ω(n), growing without bound. For all n ≥ 1 the family {H′_n} witnesses the strict separation claimed in Part (2).

Part (3): AND-inspection lower bound (proof). We prove that any correct incremental algorithm must probe every atom in the update witness set Φ(u) = S_u ∪ {v_u} of the updated hyperedge u = (S_u, {v_u}). Since |Φ(u)| = |S_u| + 1 = k + 1, where k = |S_u| is the tail size, this gives an Ω(k) lower bound on probes per update.

Step 1: Computation model. Algorithm A maintains G_F(A) under hyperedge updates. It operates in the hypergraph oracle model: after receiving update u, A may issue unit-cost probes of two types:

• Atom probe probe(a): returns (inA(a), inCl(a)), where inA(a) = 1[a ∈ A] and inCl(a) = 1[a ∈ cl_F(A)] (derivability status under the current rule set, before the update).
• Rule probe probe(e): returns whether e ∈ F, and if so its tail S(e) and head {v(e)}.

The algorithm's only access to the instance is through these probes; it has no other channel to the hypergraph or the closure state. A certificate or index computed in a prior round counts as zero cost only if it was itself built from probes whose cost is charged to that round. Crucially, any cached structure not built from a probe of p cannot distinguish I+_p from I−_p (Lemma 11.5), so it cannot guide a correct update on both instances simultaneously. A is correct if it always outputs the exact G_F(A) after each update.
We say A is Φ-avoiding if, after receiving u, it outputs G_F(A) without probing some p ∈ Φ(u) = S_u ∪ {v_u}. We prove that no Φ-avoiding algorithm is correct.

Why this model is natural. Atom probes are the natural unit of incremental maintenance: any algorithm that updates G_F(A) must at minimum determine the current derivation status of the atoms touched by the update. Rule probes model reading hyperedge definitions to determine which rules are active. The model is not weaker than reality: a real implementation may use hash tables or indices, but any such structure was itself built by reading atoms, and the cost of building it is charged to the round in which those reads occurred. Specifically, the "only access" clause in the model handles precomputed certificates: a certificate built in a prior round already paid for its probes then; if it was built without probes, it must still distinguish our paired instances (which differ only at p) to correctly guide the current update, and distinguishing them requires probing p. The oracle model therefore captures the minimal information any correct maintenance algorithm must acquire, regardless of data structure or implementation strategy.

Lemma 11.5 (Indistinguishability). For any update u = (S_u, {v_u}) and any p ∈ Φ(u) = S_u ∪ {v_u}, there exist two hypergraph instances I+_p and I−_p such that: (i) every atom probe of b ≠ p and every rule probe returns the same answer on both; (ii) probe(p) returns different answers on the two instances; and (iii) the correct output G_F(A) differs between the two instances.

Proof of Lemma 11.5. Fix u = (S_u, {v_u}) with S_u = {s_1, ..., s_k}. We construct one pair per atom.

Pairs for tail atoms s_j ∈ S_u. Fix s_j ∈ S_u. Choose fresh nodes x_1, ..., x_{k′} with k′ ≥ 1. Construct: H+_{s_j} = {r_{s_j} : ({x_1, ..., x_{k′}}, {s_j})} ∪ F_0 and H−_{s_j} = F_0, where F_0 is a base rule set containing u and all rules not involving s_j, and with A = {x_1, ..., x_{k′}} ∪ (S_u \ {s_j}). In H+_{s_j}: s_j ∈ cl(A) (via r_{s_j}), so all of S_u ⊆ cl(A), r_u fires, and v_u ∈ cl_{H+_{s_j} ∪ {u}}(A). In H−_{s_j}: s_j ∉ cl(A) (no rule derives it and s_j ∉ A), so r_u cannot fire and v_u ∉ cl_{H−_{s_j} ∪ {u}}(A).

Observable difference. probe(s_j) returns inCl(s_j) = 1 on H+_{s_j} and inCl(s_j) = 0 on H−_{s_j}.

Every other probe returns the same answer. Both instances share F_0 (so all rule probes for e ≠ r_{s_j} agree); if r_{s_j} is probed directly, note that r_{s_j} has head s_j, so learning its presence or absence tells the algorithm exactly inCl(s_j). For all atoms b ≠ s_j: b ∈ cl(A) in one instance iff b ∈ cl(A) in the other (derivations of b do not use r_{s_j}, since r_{s_j} is the only rule with head s_j and no rule has s_j in its body in F_0). Therefore every probe of b ≠ s_j returns the same answer on both instances.

Pair for the head atom v_u. Construct: H+_{v_u} = F_0 with A+ = A, and H−_{v_u} = F_0 with A− = A ∪ {v_u}, where F_0 contains u and all of S_u ⊆ cl(A) is satisfied. In H+_{v_u}: v_u ∉ A, v_u ∈ cl_{F_0 ∪ {u}}(A) (via r_u), and no singleton path reaches v_u (choose F_0 without singleton arcs to v_u), so v_u ∈ Emg(A). In H−_{v_u}: v_u ∈ A−, so v_u ∉ Emg(A−) by Definition 3.4.

Observable difference. probe(v_u) returns inA(v_u) = 0 on I+_{v_u} and inA(v_u) = 1 on I−_{v_u}.

Every other probe returns the same answer. Both instances have the same rule set F_0 ∪ {u}. For all b ≠ v_u: inA(b) is the same (A− = A ∪ {v_u}) and inCl(b) is the same (adding v_u to A only affects atoms reachable from v_u, which we ensure have no outgoing rules in F_0).

Step 3: Differing correct outputs.
For each pair, the correct outputs differ:

• Tail atom s_j: v_u ∈ cl(A) after u on H⁺_{s_j} (the rule fires) but v_u ∉ cl(A) on H⁻_{s_j} (the rule is blocked). If v_u ∉ F, this changes the emergent-capability set; if v_u ∈ F, it changes the safety status. Either way G_F differs.

• Head atom v_u: v_u ∈ Emg(A) on I⁺_{v_u} but v_u ∉ Emg(A⁻) on I⁻_{v_u}. The emergent sets differ.

Step 4: Any Φ-avoiding algorithm fails. Let A be any deterministic algorithm that skips probe(p) for some p ∈ Φ(u).

Case p = s_j ∈ S_u: By Lemma 11.5(i), every probe of b ≠ s_j and every rule probe returns the same answer on I⁺_{s_j} and I⁻_{s_j}. Since A is Φ-avoiding, it never issues probe(s_j). Therefore every probe A issues receives the same response on both instances throughout its entire execution. By determinism, A produces identical output on I⁺_{s_j} and I⁻_{s_j}. By Lemma 11.5(iii), the correct outputs differ, so A errs on at least one of the two instances.

Case p = v_u: By Lemma 11.5(i), every probe of b ≠ v_u and every rule probe returns the same answer on I⁺_{v_u} and I⁻_{v_u}. Since A never issues probe(v_u), every probe it issues returns the same response on both instances. By determinism, A produces identical output on both. By Lemma 11.5(iii), the correct outputs differ, so A errs on at least one of the two instances.

Since A fails for every skipped p ∈ Φ(u), any correct algorithm must probe every atom in Φ(u) = S_u ∪ {v_u}. This argument is an instance of Yao's minimax principle [Yao, 1977]: the lower bound is witnessed by a distribution over inputs (the paired instances I±_p) on which no deterministic algorithm can succeed without probing p. Concretely:

(probes of any correct algorithm after update u) ≥ |S_u| + 1 = k + 1.

Consequence and connection to AND-semantics.
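The paired-instance construction behind this bound can be checked mechanically for the tail-atom case. The sketch below is illustrative, not the paper's implementation: the encoding of rules as (frozenset tail, head) pairs and the helper names cl and make_pair are assumptions of this sketch. It builds H⁺_{s_j} and H⁻_{s_j} for a k-tail AND-rule r_u and verifies that the closures differ only at s_j and the head it unlocks, for every choice of skipped tail atom:

```python
# Sketch of Lemma 11.5's tail-atom pair (illustrative encoding).
# Rules are (frozenset_tail, head) pairs with AND-semantics: the head
# is derived once every tail atom is already in the closure.

def cl(A, rules):
    """Forward-chaining closure of the initial atom set A."""
    closure = set(A)
    changed = True
    while changed:
        changed = False
        for tail, head in rules:
            if head not in closure and tail <= closure:
                closure.add(head)
                changed = True
    return closure

def make_pair(k, j):
    """Build (H_plus, H_minus, A) for tail atom s_j of the AND-rule
    r_u: {s_1, ..., s_k} -> v_u, following the proof's construction."""
    S_u = [f"s{i}" for i in range(1, k + 1)]
    r_u = (frozenset(S_u), "v_u")
    r_sj = (frozenset({"x1"}), S_u[j])   # fresh node x1 derives s_j
    A = {"x1"} | (set(S_u) - {S_u[j]})   # everything but s_j is initial
    F0 = [r_u]                           # base rules containing u
    return F0 + [r_sj], F0, A

k, j = 4, 2
H_plus, H_minus, A = make_pair(k, j)
plus, minus = cl(A, H_plus), cl(A, H_minus)

assert "v_u" in plus and "v_u" not in minus   # correct outputs differ (iii)
assert plus ^ minus == {"s3", "v_u"}          # closures differ only at s_j
                                              # (j=2 -> "s3") and its head

# Each of the k tail atoms yields such a pair, so a correct maintenance
# algorithm must inspect all k tails (plus v_u): k + 1 probes in total.
for jj in range(k):
    Hp, Hm, A0 = make_pair(k, jj)
    assert ("v_u" in cl(A0, Hp)) != ("v_u" in cl(A0, Hm))
```

The frozenset tails make the AND-condition explicit: `tail <= closure` is exactly the conjunctive firing test that forces the Ω(k) inspection.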
The lower bound Ω(k) follows from the AND-condition structure of capability hyperedges: to determine whether r_u fires, an algorithm must check all k preconditions in S_u. Systems with only singleton-tail rules need O(1) probes per update; AND-rules require Ω(k) probes for rule-activation verification. DRed probes exactly Φ(u) plus downstream effects:

(work of DRed) = O((k + |Δ(u)|) · (n + mk)).

DRed matches the lower bound for verifying rule activation; the downstream propagation cost O(|Δ(u)| · (n + mk)) is separately justified by Part (2).

Remark 11.6 (Why the lower bound requires AND-semantics). The AND-inspection lower bound (Part 4) follows from the conjunctive structure of capability hyperedges. An algorithm must probe all k tail atoms S_u = {s_1, ..., s_k} to determine whether r_u fires: if it skips any s_j, we construct an instance where s_j ∉ cl(A) (so r_u is blocked) that is observationally identical to the original. An algorithm must also probe v_u to determine whether r_u changes anything: if v_u ∈ A already, the update has no effect on G_F(A). This Ω(k) bound is a structural property of AND-rules that has no analogue in singleton-rule (OR) systems: for a system with only singleton-tail hyperedges, an update u = ({s}, {v}) requires only 2 probes (s and v) regardless of graph size. The AND-condition structure is precisely what makes safety non-compositional (Theorem 9.1), and it is what forces the inspection lower bound. The Locality Gap Theorem as a whole cannot be stated or proved within the hypergraph framework of Spera [2026] alone. The upper bound in Part (2) is the DRed guarantee, which exists only because Π_{G_F} is a Datalog view (Theorem 11.1).
The lower bound in Part (4) uses the oracle model and explicit instance pairs to prove that AND-precondition inspection is unavoidable. Together they characterise the update complexity of capability safety maintenance.

12 Discussion and Future Work

12.1 What the Equivalence Settles

Expressivity. The capability hypergraph framework captures exactly Datalog_prop, no more and no less. It can express any monotone Boolean function of the initial capability set, but cannot express non-monotone queries (counting, parity, thresholds with non-trivial lower bounds) or queries requiring arithmetic. This is a precise, model-theoretic characterisation of the framework's scope.

Optimality of the closure algorithm and the maintenance gap. The O(n + mk) worklist algorithm of Spera [2026] is essentially optimal for Datalog_prop evaluation, matching the linear-time lower bound for Horn-clause forward chaining. Theorem 11.1(2) shows that magic-sets rewriting can improve constants in practice, but the asymptotic bound is tight. The Locality Gap Theorem 11.3 establishes a strictly stronger result: for the maintenance problem, the Datalog view formulation achieves an Ω(n) asymptotic advantage over naïve recomputation on the explicit family {H'_n}, with the gap growing without bound. This separation is the first formal evidence that the Datalog identification is not merely a representational convenience but enables algorithmically superior procedures.

Structural source of coNP-hardness. The coNP-hardness of computing B(F) is not an accident of the safety framing; it is the coNP-hardness of minimal-witness enumeration for monotone Boolean queries, a fundamental result in database theory [Eiter and Gottlob, 1995]. This gives the hardness result a deeper explanation.

12.2 What the Equivalence Opens

Probabilistic safety.
Probabilistic Datalog_prop [Green et al., 2007] provides the framework for extending capability safety to stochastic tool invocations. The path-independence assumption of Spera [2026] corresponds to tuple-independence in probabilistic databases, and the known tractability results for tuple-independent probabilistic Datalog directly characterise when probabilistic safety can be computed efficiently.

Incremental maintenance. The DRed algorithm and its successors provide provably correct incremental view maintenance, extending the dynamic hypergraph theorems of Spera [2026] with formal guarantees on the number of re-derivations, a quantity not previously bounded by the capability framework.

Learning. Datalog_prop learning theory [Dalmau et al., 2002] provides tighter sample-complexity bounds for structured hypothesis classes, directly addressing the loose PAC bound of Spera [2026]. In particular, structured hypergraph families (low fan-in, sparse, tree-structured dependency) correspond to structured Datalog_prop programs whose Rademacher complexity is substantially smaller than the Sauer–Shelah bound.

Distributed safety. Distributed Datalog_prop evaluation provides a framework for extending coalition safety checking to systems where capabilities are distributed across nodes and no single node has global visibility, a setting not addressed by Spera [2026].

12.3 Limitations

The equivalence established in this paper is for propositional, monotone capability systems: hyperedges have positive preconditions, capability sets are finite, and safety is defined by a fixed forbidden set. Extensions to non-monotone settings (capabilities that can be revoked), probabilistic capabilities (hyperarc firing with uncertainty), or higher-order interactions (capabilities whose semantics depend on context) are not captured directly by the Datalog_prop identification and require further study.
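The monotone, propositional setting that the identification covers can be made concrete. The sketch below is illustrative (the rule encoding and the helper name cl are assumptions of this sketch); the rules mirror the paper's Telco fragment (c1 ⇒ c3, c2 ⇒ c5, c3 ∧ c5 ⇒ c6). It checks exhaustively that the closure operator is monotone in the EDB, which is exactly what puts the framework inside Datalog_prop and outside non-monotone queries such as parity:

```python
# Monotonicity of AND-rule closure (illustrative sketch). Rules are
# (frozenset_tail, head) pairs evaluated by forward chaining.
from itertools import chain, combinations

def cl(A, rules):
    """Forward-chaining closure of the initial atom set A."""
    closure = set(A)
    changed = True
    while changed:
        changed = False
        for tail, head in rules:
            if head not in closure and tail <= closure:
                closure.add(head)
                changed = True
    return closure

rules = [
    (frozenset({"c1"}), "c3"),
    (frozenset({"c2"}), "c5"),
    (frozenset({"c3", "c5"}), "c6"),   # AND-rule: both tails required
]

atoms = ["c1", "c2"]
subsets = [set(s) for s in chain.from_iterable(
    combinations(atoms, r) for r in range(len(atoms) + 1))]

# cl is monotone: A ⊆ A' implies cl(A) ⊆ cl(A'), checked exhaustively here.
for A in subsets:
    for B in subsets:
        if A <= B:
            assert cl(A, rules) <= cl(B, rules)

# Hence "c6 is derivable" is a monotone Boolean function of the EDB,
# while e.g. the parity of the EDB is not monotone and lies outside
# the fragment.
assert "c6" in cl({"c1", "c2"}, rules)
assert "c6" not in cl({"c1"}, rules)
```

Revocation would break exactly this property: removing an atom could shrink the closure non-monotonically, which is why non-monotone extensions fall outside Datalog_prop.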
The lower-bound results (Theorem 11.3, Part 4) apply within the oracle model defined in Section 11; a general-purpose RAM lower bound for full incremental maintenance remains open.

A Empirical Detail: Derivation Trees and Aggregate Statistics

A.1 A Representative Trajectory as a Datalog Derivation

Spera [2026] used a 12-capability Telco deployment (n = 12, m = 6, k = 2). A joint billing–service session with EDB D_joint = {has(c1), has(c2), has(c3), has(c5), has(c7), has(c8)} produces the derivation:

has(c1) ⇒ has(c3), has(c7)              [depth 1]
has(c2) ⇒ has(c4), has(c5), has(c8)     [depth 1]
has(c3) ∧ has(c5) ⇒ has(c6)             [depth 2]
has(c7) ∧ has(c8) ⇒ has(c9)             [depth 2]

The atom c9 (ServiceProvision) is emergent: c9 ∈ Emg(A_bill ∪ A_svc), with why-provenance Why(has(c9)) = {{c1, c2}}.

A.2 A Real AND-Violation as Datalog Non-Modularity

The canonical billing+payment failure reaches c12 (forbidden, PCI-DSS 4.0). With D_bill = {has(c1), ..., has(c5)} and D_pay = {has(c1), has(c2), has(c10)}: both are individually safe, but D_bill ∪ D_pay contains {has(c3), has(c10)}, which fires rule h6 to derive has(c12) and then forbidden. Minimal witness: {c3, c10} ∈ B(F).

A.3 Aggregate Statistics

Datalog-vocabulary statement                                              | Empirical value
Sessions with Emg(A) ≠ ∅ (conjunctive derivation required)                | 42.6% [39.4, 45.8]
AND-violations under workflow planner (non-modularity instances)          | 38.2% [33.4, 43.1]
AND-violations under hypergraph planner (correct Datalog_prop evaluation) | 0% (Thm. 4.3)
Mean derivation depth, conjunctive sessions                               | 2.4 steps
Mean derivation depth, long-chain sessions                                | 9.1 steps (1.70× BFS)

References

S. Abiteboul, R. Hull, and V. Vianu. Foundations of Databases. Addison-Wesley, 1995.

S. Ceri, G. Gottlob, and L. Tanca.
What you always wanted to know about Datalog (and never dared to ask). IEEE Transactions on Knowledge and Data Engineering, 1(1):146–166, 1989.

V. Dalmau, P. G. Kolaitis, and M. Y. Vardi. Constraint satisfaction, bounded treewidth, and finite-variable logics. In CP 2002, pages 310–326, 2002.

T. Eiter and G. Gottlob. Identifying the minimal transversals of a hypergraph and related problems. SIAM Journal on Computing, 24(6):1278–1304, 1995.

T. J. Green, G. Karvounarakis, and V. Tannen. Provenance semirings. In Proceedings of PODS 2007, pages 31–40, 2007.

N. Immerman. Relational queries computable in polynomial time. Information and Control, 68(1–3):86–104, 1986.

C. Spera. Safety is non-compositional: A formal framework for capability-based AI systems. arXiv preprint arXiv:2603.15973, March 2026.

M. H. van Emden and R. A. Kowalski. The semantics of predicate logic as a programming language. Journal of the ACM, 23(4):733–742, 1976.

A. Benveniste, B. Caillaud, D. Nickovic, et al. Contracts for system design. Foundations and Trends in Electronic Design Automation, 12(2–3):124–400, 2018.

W. W. Cohen. PAC-learning non-recursive Prolog clauses. Artificial Intelligence, 79(1):1–38, 1995.

K. Erol, J. Hendler, and D. S. Nau. HTN planning: Complexity and expressivity. In AAAI-94, pages 1123–1128, 1994.

R. Fagin. Generalized first-order spectra and polynomial-time recognizable sets. In R. Karp, editor, Complexity of Computation, SIAM–AMS Proceedings, pages 43–74. American Mathematical Society, 1974.

C. B. Jones. Tentative steps toward a development method for interfering programs. ACM Transactions on Programming Languages and Systems, 5(4):596–619, 1983.

J. Leike, M. Martic, V. Krakovna, et al. AI safety gridworlds. arXiv preprint arXiv:1711.09883, 2017.

T. Murata. Petri nets: Properties, analysis and applications. Proceedings of the IEEE, 77(4):541–580, 1989.

Y. Qin, S. Liang, Y. Ye, et al. ToolLLM: Facilitating large language models to master 16,000+ real-world APIs. In ICLR 2024 (Spotlight), 2023.

Y. Shen et al. TaskBench: Benchmarking large language models for task automation. arXiv preprint arXiv:2311.18760, 2023.

A. C.-C. Yao. Probabilistic computations: Toward a unified measure of complexity. In Proceedings of the 18th Annual Symposium on Foundations of Computer Science (FOCS), pages 222–227. IEEE, 1977.