Exact Consistency Under Partial Views: Graph Colorability, Capacity, and Equality in Multi-Location Encodings


Authors: Tristan Simas

Abstract—We construct a structural theory of failure and a graph-capacity framework for analyzing structural integrity under distributed source coding. Structural integrity means the code can correct erasures: the mapping from source to observable syndrome is injective. Admissible partial views induce a confusability graph on latent tuples; in the exact coordinate-view model, this graph class is exactly characterized by upward-closed families of coordinate-agreement sets, and exact recovery with a T-ary tag is equivalent to T-colorability. Repeated composition yields strong powers, so the normalized block-rate sequence converges to an asymptotic Shannon capacity bounded above by Lovász-ϑ. The upper theory is sharp whenever confusability is transitive; meet-witnessing and fiber coherence provide checkable sufficient conditions for that collapse. Under an affine restriction, the coordinate structure yields a representable matroid whose rank bounds confusability. The theory yields verifiable structural-integrity criteria with applications to programming-language runtimes, databases, and dependency managers.

I. INTRODUCTION

The substantive question in distributed source coding with side information is what exact ambiguity structure survives once a system exposes only partial observations of its source, and what formal structural-integrity guarantees are possible in that regime. Structural integrity here means the code can correct erasures: the mapping from source to observable syndrome is injective, so the latent state is uniquely recoverable from the observable tag. This paper answers that question with a zero-error graph-capacity theory for deterministic partial views on latent tuples.
The theory is predictive: given only the view-family specification, the full failure topology, exact recovery law, asymptotic capacity, and standard upper bounds are deductive consequences of the architecture, with the realizable graph class fully characterized. When one latent fact or tuple of facts is represented at several modifiable locations, the available observation need not identify a unique latent state. Under partial views, the surviving ambiguity has structure: the admissible view family induces a confusability graph on latent tuples. Its edges record the exact state pairs that the architecture cannot separate, proper colorings quantify the side-information budget needed for exact recovery, and repeated composition pushes the same structure to strong powers and asymptotic Shannon capacity. The paper therefore studies not just whether a multi-location system can fail, but the topology of those failures, the integrity guarantees excluded or permitted by that topology, and the recovery laws forced by it. The resulting theory lies at the intersection of zero-error information theory, side-information source coding, finite converses, and structural-integrity analysis [1]–[6].

T. Simas is with McGill University, Montreal, QC, Canada (e-mail: tristan.simas@mail.mcgill.ca). Manuscript received March 18, 2026. © 2026 Tristan Simas. This work is licensed under CC BY 4.0. License: https://creativecommons.org/licenses/by/4.0/

Novelty. Unlike Witsenhausen's setting where the observation law is given, here a deterministic multi-location partial-view architecture generates that law and thereby constrains the realizable graph class. (L: MFT125-131) The graph-generation mechanism is now more sharply characterized.
On the full labeled tuple space, the exact coordinate-view model realizes exactly the confusability relations determined by upward-closed families of coordinate-agreement sets: two distinct tuples are adjacent if and only if their agreement set lies in such a family, and conversely every upward-closed family is realizable by some view architecture. (L: MFT125-128) When the alphabet has at least two symbols, every proper agreement set occurs for some distinct tuple pair, so this is a complete characterization up to the vacuous full-agreement case of identical tuples. (L: MFT129-131) Coordinatewise alphabet permutations still act by graph automorphisms, and any latent tuple can be carried to any other by such an automorphism. (L: MFT120-124) Hence the induced graphs form a specific monotone agreement-set class rather than an arbitrary finite graph family. The model already generates non-clique graphs, includes the cluster-collapse subclass on which equality is explicit, and is closed under the block-composition operation that becomes the strong product on confusability graphs. The binary square already yields a 4-cycle rather than a clique, and its iterates give a concrete infinite non-clique strong-product family inside the class.

A. Distributed Source Coding Formulation

The model yields an exact-consistency analogue of zero-error resolvability for modifiable encodings. Rate is measured by the number of independently writable locations; we refer to this quantity as the independent rate and use degrees of freedom (DOF) only as shorthand.

Failure geometry from partial views. The main jump of the paper is from clique-shaped ambiguity to architecture-specific failure structure.
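The agreement-set characterization can be exercised concretely. The following Python sketch is illustrative only (the function names are ours, not part of the paper's Lean artifact): it builds the confusability graph induced by an upward-closed family of coordinate-agreement sets and confirms that the coordinate-view family on two binary facts yields the 4-cycle rather than a clique, and that a coordinatewise alphabet permutation acts as a graph automorphism.

```python
from itertools import product, combinations

def agreement_set(u, v):
    """Set of coordinates on which two tuples agree."""
    return frozenset(i for i, (a, b) in enumerate(zip(u, v)) if a == b)

def upward_closure(generators, n):
    """Upward closure of a family of coordinate sets inside {0, ..., n-1}."""
    closed = set()
    for g in generators:
        for k in range(len(g), n + 1):
            for sup in combinations(range(n), k):
                if g <= frozenset(sup):
                    closed.add(frozenset(sup))
    return closed

def confusability_edges(family, alphabet, n):
    """Distinct tuples are adjacent iff their agreement set lies in the family."""
    states = list(product(alphabet, repeat=n))
    return {frozenset({u, v}) for u, v in combinations(states, 2)
            if agreement_set(u, v) in family}

# Coordinate views on two binary facts: adjacency iff the tuples agree on at
# least one coordinate, i.e. the upward closure of {{0}, {1}}.  (The full
# agreement set {0, 1} only occurs for identical tuples: the vacuous case.)
fam = upward_closure([frozenset({0}), frozenset({1})], n=2)
edges = confusability_edges(fam, (0, 1), n=2)

assert len(edges) == 4                               # the 4-cycle, not K4
assert frozenset({(0, 0), (1, 1)}) not in edges      # diagonals absent
assert frozenset({(0, 1), (1, 0)}) not in edges

# Coordinatewise alphabet permutations act by graph automorphisms:
flip0 = lambda s: (1 - s[0], s[1])
assert edges == {frozenset({flip0(u), flip0(v)}) for u, v in edges}
```

The closure computation here is brute force over all supersets, which is adequate for small coordinate counts; it is meant only to make the adjacency rule tangible.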
In the multi-fact partial-observation extension, the confusability graph is the structural object controlling exact consistency under partial views, exact recovery is equivalent to graph colorability, and exact finite weighted success is determined by the maximum T-colorable induced subgraph. The binary two-fact square already shows that the induced graph need not be a clique: it is a 4-cycle, so failures above the unit-rate boundary are structured rather than uniform. (L: MFT3, MFT5-10, MFT24)

Asymptotic rate and upper theory. This graph characterization then lifts to strong powers and asymptotic zero-error rates. The full n-block confusability graph is the n-fold strong power of the one-shot graph; the normalized block-rate sequence converges to a real asymptotic Shannon-capacity value equal to the supremum of the finite block rates; and that value is bounded above by both the logarithm of the complement chromatic number and a fixed Lovász-ϑ upper convention. The one-shot upper object matches standard orthonormal-representation, primal-PSD, and dual-theta forms. (L: MFT35, MFT48, MFT55, MFT64, MFT69, MFT84)

Equality characterization. The paper also proves an equality characterization. For the original clique-fiber subclass coming from a surjective label map, asymptotic Shannon capacity and the fixed Lovász-ϑ upper bound both collapse to log |B|, where B is the fiber-label alphabet. This equality for cluster-graph structure is classical; the new point here is that transitivity of base confusability gives a model-side route from the view-family architecture to the cluster-graph equality case, while meet-witnessing and fiber coherence are stronger sufficient conditions. Under these conditions, the asymptotic Shannon capacity and the fixed Lovász-ϑ upper bound collapse to the logarithm of the number of connected components or realized transcript fibers, depending on the structure available.
(L: MFT85, MFT91, MFT96, MFT102, MFT109)

Finite converse foundation. Let X be a finite latent source, let Y be the deterministic observation transcript available to a resolver, and let T be a finite auxiliary tag. Exact zero-error recovery from (Y, T) is possible exactly when the pair map x ↦ (Y(x), T(x)) is injective on the surviving ambiguity class. The same obstruction appears in confusability, counting, conditional-entropy, decoder-output, and finite-gap formulations, and it admits a budgeted finite-error extension. In particular, exact k-way resolution requires at least log₂ k bits of side information. (L: OBS1-2, OBS5, CIA3, PRB47, PRB55, PRB68, PRB81, PRB83, PRB85, PRB87, PRB90)

Boundary corollary. For deterministic multi-location encodings, the zero-incoherence threshold equals 1: the single-source regime is the unique no-failure corner, and it is exactly the structural-integrity regime. In coding-theoretic terms, this is the erasure-correcting regime where the syndrome uniquely determines the message. The richer mathematics begins once the finite obstruction survives and acquires nontrivial confusability structure. (L: COH1-2, RAT1, CAP2-3)

Secondary fact-side question. The same framework also supports a second structural question: which fact coordinates are determined by others across the realized state family? Under an affine restriction, that coordinate-dependence question becomes the standard span-membership condition on coordinate functionals, so the induced fact-closure is a representable matroid on fact indices. The value of this specialization is operational rather than linear-algebraic novelty: the same realized-state model then carries two complementary invariants, a graph on latent states and, in the affine regime, a matroid on coordinates whose rank gives upper bounds on confusability and capacity.
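To make the matroid-rank bound concrete, here is a minimal Python sketch under an assumed explicit presentation: the affine family's direction space is given by a generator matrix G (a hypothetical example of ours, not taken from the paper), and the rank invariant t(S) is computed as the rank of the coordinate projection of G onto S via exact rational Gaussian elimination.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a rational matrix, given as a list of rows, by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def t(generators, S):
    """Rank of the coordinate projection of the direction space onto S."""
    return rank([[row[j] for j in sorted(S)] for row in generators]) if S else 0

# Hypothetical direction space on 4 fact coordinates: x3 = x0 + x1, x2 free.
G = [[1, 0, 0, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]

assert t(G, {0, 1, 3}) == 2          # coordinate 3 is determined by {0, 1}
assert t(G, {0, 1, 2, 3}) == 3       # full rank of the direction space
for S in [{0}, {0, 1}, {0, 1, 2}, {0, 1, 2, 3}]:
    assert t(G, S) <= len(S)         # formal size bound t(S) <= |S|
```

Exact rationals avoid the floating-point rank instabilities a numerical routine would invite; for a fixed presentation the cost is one elimination pass per queried set S.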
Those bounds are algorithmically tractable once the affine family is presented explicitly by a basis or generator matrix for its direction space; under that representation, the relevant rank quantity is exactly the rank of the restricted coordinate projection and satisfies formal size bounds such as t(S) ≤ |S|, so the computation reduces to Gaussian elimination. (L: AFM1-11)

B. Realizability and Verification

The same deterministic model also raises a realizability question at its opposite boundary: when can a concrete host system realize the rate-1 corner and thereby certify structural integrity? We show that two structural properties are required:
1) Causal update propagation: derived locations must update automatically when the source changes.
2) Provenance observability: the system must expose enough structural information to verify which locations are authoritative and which are derived.

C. Paper Organization

The main theorem arc is: partial views induce a confusability graph; exact recovery becomes graph colorability; block composition turns that graph into strong powers; the resulting normalized rates converge to the classical Shannon-capacity value of the induced graph; a fixed Lovász-ϑ upper theory bounds that value; and transitivity of confusability gives the model-side export to the classical cluster-graph equality case. The deterministic converse is the finite foundation of that arc. The affine and realizability sections are later specializations and boundary consequences of the same model rather than separate primary claims.

Section III gives the minimum model and threshold setup. Section IV develops the finite converse foundation. Sections V–VIII contain the main graph-capacity arc: graph characterization, block and asymptotic capacity, upper theory, and equality characterization. Section IX then records the affine fact-side specialization as a second lens on the same confusability structure.
Section X, Section XI, and Section XII return to the unit-rate boundary of the same model and develop its structural, operational, and rate-complexity consequences. Appendix A records representative host-level readings, and Supplement A contains case-study details and traces. A supplementary Lean 4 artifact machine-checks the converse family, graph extension, theta upper theory, affine fact-matroid specialization, threshold chain, operational realizability criterion, and rate-complexity arguments [7].

D. Scope

The scope is finite facts represented at multiple locations under admissible edits. The focus is on structural facts, meaning facts whose encoding locations are fixed when an object, declaration, or artifact is created. This isolates the zero-error regime and captures a wide class of realizations without making the theory domain-specific.

E. Contributions

The main contributions are grouped into the core graph-capacity arc and the application consequences built on the same model.

a) Core graph-capacity contributions:
• A multi-fact confusability-graph extension in which partial observations on latent tuples produce non-clique confusability graphs, exact recovery becomes equivalent to graph colorability, success-set exactness becomes equivalent to induced-subgraph colorability, and the exact finite weighted-success value is characterized by the maximum T-colorable induced subgraph.
• A strong-power block law and asymptotic Shannon-capacity export in which the n-block confusability graph is the n-fold strong power of the one-shot graph, so the classical Shannon-capacity framework applies directly to the induced graph family and the normalized block-rate sequence converges to the corresponding asymptotic capacity value.
• A fixed Lovász-ϑ upper theory and standard equivalent forms in which the same asymptotic capacity is bounded above by complement-chromatic and fixed Lovász-ϑ upper objects, with standard orthonormal, primal-PSD, and dual-theta forms.
• A model-side equality characterization in which the classical cluster-graph equality case is exported back to the original view-family model under transitivity of confusability, with meet-witnessing as a checkable sufficient condition and fiber coherence as a stronger one.
• A deterministic finite converse toolkit for exact consistency, presented through pair injectivity together with confusability, counting, conditional-entropy, decoder-output, and entropy-gap formulations of the same finite obstruction, deterministic data processing, and a budgeted finite-error extension.

b) Consequence contributions:
• An affine fact-matroid specialization from the same latent-state framework in which affine realized-state families induce a representable matroid on fact indices: semantic determination becomes span membership, and the resulting matroid rank yields tractable upper bounds for the main confusability/capacity problem.
• Threshold and realizability corollaries showing that unit rate is the unique zero-incoherence regime, that unit rate is exactly the structural-integrity regime, that unit rate has O(1) manual update cost while higher rates incur an Ω(n) lower bound, and that causal propagation together with provenance observability gives an operational criterion for realizable verifiable structural integrity in concrete hosts.
• A supplementary machine-checked artifact verifying the theorem chain in Lean 4.

[Fig. 1 here: panel (a), clique-shaped failure — total ambiguity, states v1, v2, v3, v4 pairwise confusable; panel (b), structured failure — a 4-cycle on (0,0), (0,1), (1,1), (1,0) with opposite corners not confusable, requiring only 2 tags.]

Fig. 1. Two failure topologies.
In (a), the observation provides no discrimination, so the confusability graph is a clique: every state is confusable with every other, and exact recovery requires a tag for each state. In (b), partial observations provide some discrimination: opposite corners are distinguishable even though adjacent states are not. The 4-cycle is 2-colorable, so exact recovery requires only 2 tags. This structural difference is invisible to the unit-rate threshold alone.

II. FIGURES

III. MODEL AND BASIC QUANTITIES

This section introduces only the minimum structure needed to state the deterministic converse foundation and its boundary threshold corollary: latent states, deterministic observations, auxiliary tags, ambiguity classes, and independent rate.

A. Model assumptions and notation

We use the following standing assumptions.
1) Finite latent state: each fact takes values in a finite alphabet.
2) Deterministic observations: a system state exposes a deterministic observation transcript.
3) Independent rate: the independent rate counts the number of independently writable source locations for the same latent fact.
4) Finite side information: any auxiliary tag is drawn from a finite alphabet.

These are the only ingredients needed for the deterministic zero-error theory developed below.

B. Latent states, ambiguity, and independent rate

An encoding system for a fact F is a finite collection of locations {L_1, ..., L_n} whose current values jointly determine the observation transcript available to a resolver. For a system state x, the ambiguity class is the set of latent values still compatible with that transcript; when the ambiguity class has size greater than one, the transcript alone does not identify a unique authoritative value. In this single-fact setting, latent-state ambiguity and value ambiguity coincide because the latent state is just the fact value.
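The ambiguity classes of a deterministic transcript are just the fibers of the transcript map. As a small illustration (the system and transcript here are hypothetical, chosen only to show the mechanics), the following sketch partitions latent values by the transcript they induce:

```python
from collections import defaultdict

def ambiguity_classes(states, transcript):
    """Group latent states by the observation transcript they induce."""
    fibers = defaultdict(set)
    for x in states:
        fibers[transcript(x)].add(x)
    return list(fibers.values())

# Hypothetical single-fact system: the latent value ranges over {0, 1, 2},
# but the exposed transcript keeps only the value modulo 2.
states = [0, 1, 2]
classes = ambiguity_classes(states, lambda v: v % 2)

# {0, 2} share a transcript, so the transcript alone is not authoritative
# there, while {1} is a singleton class and needs no side information.
assert {0, 2} in classes and {1} in classes
```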
In the later multi-fact extension, ambiguity is instead over full latent tuples compatible with the partial observations. A fact is an atomic semantic attribute of the represented object that can vary independently of other attributes, and a fact is structural if the locations encoding it are fixed when the underlying object, declaration, or artifact is created.

The admissible edit set E(C) is part of the primitive specification of an encoding system C. It records which edit events or edit sequences are considered reachable by the model. The paper does not derive E(C) from a particular concurrency policy, protocol, or consensus mechanism; those are downstream instantiations. Independence and independent rate are therefore always defined relative to the chosen admissible edit model.

Definition III.1 (Independent Location). Let E(C) denote the admissible edit set of encoding system C. Two locations encoding the same fact are independent relative to E(C) if some admissible edit sequence in E(C) can change one without forcing the other to match it, equivalently if some admissible edit sequence reaches a state in which the two locations disagree.

Definition III.2 (Derived Location). A location is derived from a source location if updating the source determines the derived value without an additional manual edit.

Definition III.3 (Degrees of Freedom). The degrees of freedom (DOF), or independent rate, of an encoding system for fact F is the number of pairwise independent source locations encoding F under the specified admissible edit set E(C). In the single-fact model, the remaining encoding locations are treated as derived views of those independent sources.

Definition III.4 (Structural Integrity). An encoding system has structural integrity for fact F if every reachable state is coherent, so no reachable system state presents mutually incompatible encodings of F.

Definition III.5 (Integrity Violation).
A reachable integrity violation is a reachable incoherent state for fact F.

Proposition III.6 (Unit Independent Rate Guarantees Coherence). If the independent rate is 1, then all reachable states are coherent. (L: COH1)

Proof. At independent rate 1, exactly one location is independent and all remaining locations are derived from it. Derived locations cannot diverge from their source, so disagreement is unreachable. ■

Proposition III.7 (Independent Rate Above One Permits Incoherence). If the independent rate exceeds 1, then incoherent states are reachable. (L: COH2)

Proof. With at least two independent locations, admissible edits can assign different values to those locations, producing a reachable disagreement state. ■

C. Zero-Incoherence Threshold

Definition III.8 (Zero-Incoherence Threshold). The zero-incoherence threshold is the largest independent rate for which incoherent states are unreachable.

Corollary III.9 (Threshold Corollary). For deterministic multi-location encodings, the zero-incoherence threshold equals 1. (L: RAT1, CAP2-3)

Proof. Achievability. Suppose DOF(C, F) = 1, and let L_s be the unique independent location. By Definition III.3, every other encoding location is non-independent and is therefore treated in this model as a derived view of L_s. By Definition III.2, updating L_s determines the value of each such derived location without an additional manual edit. Hence all encoding locations agree in every reachable state, so zero incoherence is achieved.

Converse. Suppose DOF(C, F) = k > 1. By Definition III.1, there exist at least two independent locations L_1, L_2 and an admissible edit sequence that can make them disagree. Equivalently, choose values v_1 ≠ v_2 and apply edits setting L_1 ← v_1 and then L_2 ← v_2. By independence, the second edit does not force L_1 to change.
The resulting reachable state satisfies value(L_1) ≠ value(L_2) and is therefore incoherent. Thus zero incoherence holds exactly at independent rate 1, so the zero-incoherence threshold equals 1. ■

Corollary III.10 (Exact Structural-Integrity Threshold). A deterministic multi-location encoding has structural integrity if and only if its independent rate equals 1. (L: COH1-2)

Proof. If the independent rate is 1, Proposition III.6 shows that all reachable states are coherent, hence Definition III.4 holds. If the independent rate exceeds 1, Proposition III.7 gives a reachable incoherent state, so structural integrity fails. ■

This threshold is an early architectural consequence of the deterministic model. It identifies the unique trivial no-failure corner and, equivalently, the exact structural-integrity regime. Above that boundary, integrity violations are reachable by construction. The deeper theorems of the paper begin once one asks how much finite side information is needed whenever the transcript leaves a nontrivial ambiguity class.

Corollary III.11 (Side-Information Lower Bound). If a surviving ambiguity class has size k, then exact zero-error resolution requires at least log₂ k bits of side information. (L: CIA1)

Proof. Exact resolution of k surviving alternatives requires at least k distinguishable tags, hence at least log₂ k bits. ■

The remainder of the paper sharpens this counting statement into a unified finite converse family, asks how that one-shot obstruction acquires non-clique topology under partial views, and then develops the corresponding asymptotic capacity and upper-bound theory.

IV. UNIFIED FINITE CONVERSE

This section supplies the one-shot obstruction that the later graph-capacity theory lifts from clique-shaped ambiguity to non-clique failure geometry.
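The pair-injectivity criterion at the core of this section has a direct operational reading: a zero-error decoder exists exactly when the map x ↦ (Y(x), T(x)) has no collisions, in which case the decoder is its inverse table. A minimal Python sketch (the 4-state source and maps are hypothetical illustrations, not the paper's construction):

```python
def zero_error_decoder(states, Y, T):
    """Build an exact decoder for (Y, T), or return None if the pair map
    x -> (Y(x), T(x)) fails to be injective on the given ambiguity class."""
    table = {}
    for x in states:
        key = (Y(x), T(x))
        if key in table:          # two states collide: no exact decoder exists
            return None
        table[key] = x
    return lambda y, t: table[(y, t)]

# Hypothetical 4-state source whose transcript exposes only the first bit.
states = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = lambda x: x[0]

assert zero_error_decoder(states, Y, lambda x: 0) is None   # constant tag: too coarse
D = zero_error_decoder(states, Y, lambda x: x[1])           # tag restores injectivity
assert all(D(Y(x), x[1]) == x for x in states)
```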
Its role is foundational and unifying rather than graph-theoretically novel: in the single-fact setting, every surviving ambiguity class is effectively clique-shaped under the observation transcript, so the same deterministic obstruction can be stated as injectivity, clique counting, entropy, decoder-output, and finite-error bounds. The cleanest zero-error formulation starts from the joint observation-tag map itself. The first theorem gives the basic criterion; the remaining statements package the same obstruction in the forms reused later in the graph and realizability arcs.

Theorem IV.1 (Pair-injectivity characterization). Let X range over a finite latent alphabet, let Y be the deterministic observation transcript, and let T be a finite auxiliary tag. Exact zero-error recovery from (Y, T) is possible if and only if the pair map x ↦ (Y(x), T(x)) is injective on the ambiguity class under consideration. (L: OBS1-2)

Proof. Let ϕ(x) := (Y(x), T(x)).

Necessity. If ϕ is not injective, then there exist distinct latent states x_1 ≠ x_2 with ϕ(x_1) = ϕ(x_2). Any deterministic decoder D therefore receives the same input on both states and must satisfy D(ϕ(x_1)) = D(ϕ(x_2)). Since x_1 ≠ x_2, the common output cannot equal both latent states, so D errs on at least one of them.

Sufficiency. If ϕ is injective, define a decoder on the image of ϕ by D(y, t) = ϕ⁻¹(y, t), with arbitrary extension off the image. Injectivity makes ϕ⁻¹ single-valued on the image, and for every latent state x we then have D(Y(x), T(x)) = x. Thus exact zero-error recovery is possible. ■

Proposition IV.2 (Confusability formulation). Fix a deterministic observation transcript and an L-bit side-information tag. If K latent states induce the same observation transcript, then exact zero-error decoding on that ambiguity class requires K distinct tag outcomes.
Equivalently, a K-way ambiguity class forms a confusability clique whose size cannot exceed the available tag alphabet. (L: OBS5)

Proof. Apply Theorem IV.1 on an ambiguity class on which the observation is constant. Then injectivity of (Y, T) reduces to injectivity of the tag coordinate alone on that class. Since an L-bit tag provides at most 2^L outcomes, the clique size cannot exceed 2^L. ■

a) Global observation-tag budget: If the observation alphabet has size O and the auxiliary tag alphabet has size T, then exact zero-error recovery on a latent alphabet of size K requires K ≤ OT. (L: OBS4)

Proof. Let ϕ(x) = (Y(x), T(x)). By Theorem IV.1, exact zero-error recovery requires ϕ to be injective on the latent alphabet under consideration. The codomain of ϕ is the finite product alphabet Y × T, which has cardinality |Y × T| = OT. An injective map from a set of size K into a set of size OT can exist only if K ≤ OT. This is exactly the claimed global budget bound. ■

b) Fiber-level injectivity and cardinality: Fix an observation value y. On the observation fiber F_y = {x : Y(x) = y}, exact zero-error recovery forces the tag map to be injective. Consequently, |F_y| ≤ |T|. (L: OBS3)

Proof. Fix y and restrict attention to the fiber F_y = {x : Y(x) = y}. On this set the observation coordinate is constant, so for x, x′ ∈ F_y one has (Y(x), T(x)) = (Y(x′), T(x′)) ⟺ T(x) = T(x′). Hence Theorem IV.1 implies that exact recovery on F_y is possible only if the tag map x ↦ T(x) is injective on that fiber. Since the image of an injective map cannot be larger than its codomain, one obtains |F_y| ≤ |T|. ■

Proposition IV.3 (Finite counting converse). Let F range over K possible latent values. Suppose exact recovery is attempted from a fixed observation transcript together with an L-bit side-information tag. Then K ≤ 2^L.
Equivalently, exact zero-error recovery of K ambiguous states requires at least log₂ K bits of side information. (L: CIA3)

Proof. This is Proposition IV.2 specialized to a single K-way ambiguity class. ■

The counting bound is the coarsest form of the obstruction. The next step is to weight the same argument by source mass. Exact recovery still enforces injectivity on successful confusability classes, but now the question is how much entropy can remain once the source is partitioned into success and failure branches.

c) Conditional-entropy formulation: Let X be uniform on a K-way ambiguity class and let Y be the deterministic observation transcript. If Y is constant on that class, then H(X | Y) = log₂ K. In particular, any tag-observation pair (Y, T) that resolves the class exactly must satisfy

H(X | Y, T) = 0   and   H(T | Y) ≥ log₂ K.

(L: PRB47, PRB55)

Proof. If Y is constant on the K-point ambiguity class, then conditioning on Y does not refine the distribution of X. Because X is uniform on that class, this gives H(X | Y) = log₂ K. Exact recovery from (Y, T) means that X is a deterministic function of (Y, T), so H(X | Y, T) = 0. Using the chain rule,

H(X | Y) = I(X; T | Y) + H(X | Y, T) = I(X; T | Y).

Therefore I(X; T | Y) = log₂ K. Since conditional mutual information is bounded by conditional entropy, I(X; T | Y) ≤ H(T | Y), and the lower bound H(T | Y) ≥ log₂ K follows. ■

d) Mass-weighted clique entropy bound: Let S be a success set for a zero-error decoder, and suppose S forms a confusability clique under the observation transcript. If p_S denotes the total probability mass of S, then the entropy carried by the successful states obeys

Σ_{x∈S} p(x) log₂(1/p(x)) ≤ h₂(p_S) + p_S log₂ |T|,

where h₂ is the binary entropy function and |T| is the tag alphabet size. (L: PRB55)

Proof. Let B be the indicator of the event {X ∈ S}.
Then Pr[B = 1] = p_S, so H(B) = h₂(p_S). Decompose the source entropy carried by the successful states by conditioning on B:

H(X) = H(B) + H(X | B).

Restricting to the success branch B = 1, exact recovery on the clique S forces injectivity of the tag map on S, hence at most |T| successful states can be resolved. Therefore H(X | B = 1) ≤ log₂ |T|. Multiplying by the success probability gives a contribution of at most p_S log₂ |T| from that branch. Combining this with the Bernoulli split term h₂(p_S) yields the displayed inequality for the entropy mass carried by the successful states. ■

In particular, on any exact success set inside a confusability clique, the restricted observation-tag map is injective by Theorem IV.1, so the same entropy inequality applies directly to the successful branch rather than only to the whole clique. (L: OBS1, PRB55)

e) Decoder-output and gap formulations: In the same deterministic finite model, the obstruction can also be expressed as output-entropy and finite-budget gap constraints: any deterministic decoder output X̂ satisfies

H(X̂) ≤ H(Y, T) ≤ H(X),

and the observation-tag entropy and decoded-output entropy are bounded by their corresponding finite alphabet ceilings, so the deficits

log₂ |Y × T| − H(Y, T)   and   log₂ |X̂| − H(X̂)

are nonnegative and vanish only in the uniform saturation cases formalized in the supplement. (L: PRB73-74, PRB82-83, PRB90, PRB93-95)

Proof. Because the decoder is deterministic, X̂ = D(Y, T) is a deterministic function of the pair (Y, T). The deterministic data-processing statement below therefore gives H(X̂) ≤ H(Y, T). The pair (Y, T) is itself a deterministic function of the latent source X, so another application of deterministic data processing yields H(Y, T) ≤ H(X). Since (Y, T) takes values in the finite alphabet Y × T, one also has H(Y, T) ≤ log₂ |Y × T|.
Likewise H(X̂) ≤ log₂ |X̂|. Subtracting these entropies from their respective alphabet ceilings gives the nonnegative gap quantities claimed in the statement. ■

f) Deterministic data processing: Let κ be any deterministic coarsening of the observation-tag pair (Y, T). Then H(κ(Y, T)) ≤ H(Y, T). In particular, since any deterministic decoder output X̂ is a coarsening of (Y, T),

H(X̂) ≤ H(Y, T) ≤ log₂ |Y × T|.

(L: PRB68, PRB81, PRB83)

Proof. Write Z = (Y, T). If κ is deterministic, then κ(Z) is a function of Z, so the conditional entropy H(κ(Z) | Z) is zero. By the chain rule,

H(Z) = H(κ(Z), Z) = H(κ(Z)) + H(Z | κ(Z)),

and since conditional entropy is nonnegative, H(κ(Z)) ≤ H(Z). Taking κ to be the decoder map gives H(X̂) ≤ H(Y, T). Finally, because Z ranges over the finite alphabet Y × T, one has the standard finite-alphabet ceiling H(Y, T) ≤ log₂ |Y × T|. ■

This theorem is the structural reason the output and gap formulations are not independent embellishments of the counting converse. Once the observation is coarsened, every derived representation inherits the same obstruction through deterministic data processing rather than escaping it. (L: PRB68, PRB81, PRB83, PRB90)

Proposition IV.4 (Equivalence viewpoint). The confusability, counting, conditional-entropy, decoder-output, and finite-gap statements above are different normal forms of the same deterministic finite zero-error obstruction: a surviving K-way ambiguity class requires a budget of at least log₂ K bits to be resolved exactly. (L: OBS1, OBS5, CIA3, PRB47, PRB55, PRB68, PRB81, PRB83, PRB90)

Proof. Pair injectivity is the structural core.
The confusability and counting theorems express the injectivity requirement combinatorially; the conditional-entropy and weighted-entropy theorems express it in source-mass coordinates; the decoder-output and gap theorems express it through deterministic coarsening and finite alphabet ceilings; and the finite-error theorem is the relaxed version obtained by allowing a nonzero failure branch. Each theorem therefore measures the same failure of exact isolation in a different coordinate system. ■

A. Finite-error extension

The main body of the paper studies exact zero error. The same deterministic finite model also supports a budgeted finite-error extension, which should be read as a deterministic finite analogue of the classical Fano line of argument rather than as a separate asymptotic coding theorem [6], [8], [9].

Proposition IV.5 (Budgeted finite-error converse). Let P_e denote the decoder error probability on a finite source of size K, observed through a deterministic transcript alphabet of size O together with a tag alphabet of size T. Then the decoded-output entropy obeys

H(X̂) ≤ h2(P_e) + (1 − P_e) log2(OT) + P_e log2(K − 1).

(L: PRB85, PRB87)

Proof. Partition the source into success and failure events. On the success branch, the decoded output is constrained by the observation-tag budget and contributes at most log2(OT). On the failure branch, the decoder can still output at most one of the remaining K − 1 alternatives. The Bernoulli split between the two branches contributes the binary-entropy term. Equivalently, writing B for the success indicator, H(X̂) ≤ H(X̂ | B) + H(B), and this is bounded by combining the support bound on each branch with the binary entropy of the branch variable itself. ■

The zero-error theorems are the P_e = 0 boundary of this inequality.
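Before continuing, the budgeted converse can be checked numerically. The following minimal Python sketch is ours, not part of the formalization; it evaluates the right-hand side of the inequality and confirms that at P_e = 0 the budget collapses to the zero-error ceiling log2(OT).

```python
from math import log2

def h2(p: float) -> float:
    """Binary entropy in bits, with h2(0) = h2(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def output_entropy_budget(p_e: float, O: int, T: int, K: int) -> float:
    """Right-hand side of the budgeted finite-error converse:
    h2(Pe) + (1 - Pe) * log2(O*T) + Pe * log2(K - 1)."""
    return h2(p_e) + (1 - p_e) * log2(O * T) + p_e * log2(K - 1)

# At Pe = 0 the bound collapses to the zero-error budget log2(O*T).
assert output_entropy_budget(0.0, O=2, T=2, K=4) == log2(2 * 2)

# A small nonzero error budget buys only a little extra entropy headroom.
print(round(output_entropy_budget(0.1, O=2, T=2, K=4), 4))  # prints 2.4275
```

For the binary-square parameters used later (O = 2 transcripts, T = 2 tags, K = 4 states), a 10% error budget raises the ceiling from 2 bits to roughly 2.43 bits.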
In that regime the success branch occupies all probability mass, the Bernoulli term vanishes, the failure contribution disappears, and the budget collapses back to the unified finite converse above.

The next section asks what survives when admissible views separate some latent alternatives but not others. In that regime the same one-shot obstruction is no longer represented by a single clique; it becomes an induced confusability graph that can carry genuinely non-clique failure structure.

V. GRAPH CHARACTERIZATION OF PARTIAL VIEWS

Before defining the general confusability graph, we develop a concrete example: partial observations can induce nontrivial failure topology.

A. A Running Example: The Binary Square

Consider a system with two binary facts (x1, x2) ∈ {0, 1}^2, giving four latent states: (0, 0), (0, 1), (1, 0), (1, 1). Suppose the architecture exposes two partial views:
• View 1 reveals only coordinate x1
• View 2 reveals only coordinate x2

a) Confusability structure.: Two states are confusable when at least one view fails to separate them:
• Through View 1: states (0, 0) and (0, 1) are confusable (both have x1 = 0), and states (1, 0) and (1, 1) are confusable (both have x1 = 1).
• Through View 2: states (0, 0) and (1, 0) are confusable (both have x2 = 0), and states (0, 1) and (1, 1) are confusable (both have x2 = 1).

The confusability graph is therefore a 4-cycle: (0, 0) ∼ (0, 1) ∼ (1, 1) ∼ (1, 0) ∼ (0, 0), where opposite corners (0, 0) and (1, 1), as well as (0, 1) and (1, 0), are not adjacent. This is not a clique: the architecture can distinguish some pairs even though it cannot distinguish all pairs.

b) Exact recovery.: Because the graph is 2-colorable (e.g., color by parity c(x1, x2) = x1 ⊕ x2), exact recovery is possible with a 2-ary auxiliary tag.
However, the graph is not 1-colorable (it contains edges), so a 1-tag decoder must fail on at least one confusable pair. Under the uniform source, the best 1-tag exact success probability is 1/2 (choose either diagonal as the success set).

c) What this example shows.:
1) Partial views induce structured confusability, not uniform ambiguity.
2) The confusability graph captures exactly which state pairs the architecture cannot separate.
3) Graph-theoretic properties (chromatic number, independence number) determine exact recovery costs.

Coding-theoretic interpretation. This 4-cycle structure has a direct interpretation in error-correcting codes. Coloring by parity c(x1, x2) = x1 ⊕ x2 is exactly a syndrome: the tag reveals whether the two bits match. With one syndrome bit (T = 2), we can decode perfectly because each color class contains non-adjacent states. With no tag (T = 1), we cannot distinguish adjacent vertices, exactly as in the classic binary symmetric channel without feedback. The confusability graph is precisely the confusion graph of a code with parity-check matrix H = [1 1]: adjacent vertices are those the code cannot separate.

Figure 1 contrasts this structured failure with the clique-shaped failure that arises when no observation provides any discrimination. The theorems below generalize this phenomenon to arbitrary multi-fact partial-view systems.

B. General Graph Characterization

This section turns the one-shot obstruction into the paper's main structural object: the confusability graph induced by admissible partial views. The single-fact converse family treats ambiguity classes that are effectively clique-shaped under the observation transcript. One extension is to let the latent state be a tuple of facts and let each location reveal only a subset of coordinates.
Failure then becomes structured: some latent pairs remain indistinguishable and others do not, so the resulting confusability graph need not be complete. In this extension, ambiguity is now over full latent tuples rather than single fact values: two tuples are confusable when the allowed partial observations fail to separate them. The induced graph is therefore a failure map for the architecture.

The graph can be handled either explicitly or implicitly. If there are d facts over an alphabet of size q, then the latent state space has cardinality q^d, so any explicit confusability graph materialization already starts from an exponentially large vertex set. (L: MFT113) A naive explicit construction ranges over at most q^{2d} ordered state pairs. (L: MFT116-117) For that reason the appropriate algorithmic interface is implicit: the formalization packages adjacency as a Boolean confusability oracle on state pairs, proves that this oracle is equivalent to the confusability relation itself, and also packages an equivalent agreement-set oracle together with a linear bound on the total view-membership work needed by the direct scan implementation. (L: MFT114-115, MFT132-135) Under the standard explicit representation in which a latent tuple is stored as a length-d coordinate array and each admissible view is stored as its coordinate subset, this one-shot oracle is computable by direct inspection in deterministic time O(d + Σ_ℓ |V_ℓ|), hence O(Ld) for L views: compute the agreement set of the two tuples once, then scan the views and test whether some view is contained in that agreement set. The theory below is therefore compatible with implicit graph access even when explicit graph materialization is impractical, while still leaving the one-shot adjacency test itself polynomial in the size of its explicit input.

The deterministic restriction is also not fundamental at the graph-construction level.
The Lean development formalizes a support-overlap extension in which each location may emit any observation from a finite support set, and two states are confusable when some admissible location has overlapping observation support on those states. The present deterministic partial-view model embeds into that broader framework as the singleton-support special case, so the graph construction itself already extends beyond exact deterministic views even though the paper develops only the deterministic architectural arc. (L: MFT110-112)

At the same time, the exact coordinate-subset model is not graph-universal on the full tuple space, and the induced graph class can be characterized more sharply than mere vertex transitivity. The formalization first shows that equality at a view is equivalent to containment of that view inside the coordinate-agreement set of the two tuples, so confusability depends only on which coordinates agree, not on the actual symbols carried there. (L: MFT118-119, MFT122) From any view family one therefore obtains an upward-closed family of coordinate subsets,

U := { S ⊆ [F] : ∃ℓ with V_ℓ ⊆ S },

such that two distinct tuples are adjacent if and only if their agreement set lies in U. Conversely, every upward-closed family U arises from some coordinate-view architecture, so on the full labeled tuple space the realizable confusability relations are exactly the agreement-set relations cut out by upward-closed families. (L: MFT125-128) When the alphabet has at least two symbols, every proper coordinate subset occurs as the agreement set of some distinct pair of tuples, so this characterization is complete up to the irrelevant full-agreement case corresponding to identical tuples. (L: MFT129-131) Equivalently, coordinatewise alphabet permutations preserve the confusability relation and induce graph automorphisms; for any two latent tuples there exists such an automorphism carrying one to the other.
(L: MFT120-124) Thus the exact model realizes a specific monotone agreement-set graph class rather than an arbitrary finite graph family. In particular, non-vertex-transitive graphs such as a three-vertex path cannot arise as full confusability graphs in this exact model. What remains open is a classification up to unlabeled graph isomorphism, and the corresponding classification for restricted realized-state subsets or for the stochastic support-overlap extension.

Theorem V.1 (Exact recovery as graph colorability). For a finite multi-fact latent tuple observed through a finite family of partial-coordinate views, exact zero-error recovery with a T-ary auxiliary tag exists if and only if the induced confusability graph is T-colorable. (L: MFT3-4, MFT20)

Proof. Let G_conf be the confusability graph. Necessity. Suppose exact recovery with T tags is possible. If two latent tuples are adjacent in G_conf, then by definition some allowed observation leaves them indistinguishable. Exact recovery therefore cannot assign them the same tag, because the joint observation-tag pair would then coincide on two distinct latent tuples and Theorem IV.1 would be violated. Hence the tag assignment is a proper T-coloring of G_conf. Sufficiency. Suppose c is a proper T-coloring of G_conf. Define the auxiliary tag of a latent tuple to be its color. If two distinct latent tuples produce the same observation and the same color, then they are confusable and adjacent, contradicting proper colorability. Therefore the joint observation-color map is injective, and Theorem IV.1 yields an exact decoder. ■

An edge of G_conf records a specific pair of latent states that the view architecture cannot separate exactly, and a proper coloring is exactly a disambiguation budget that resolves those failures. Exact recovery cost is determined by failure topology, not by the count of sources alone.
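Both the agreement-set adjacency oracle and the colorability criterion of Theorem V.1 are directly executable on small systems. The following Python sketch is our own illustrative code (not the Lean development): it implements the one-shot confusability oracle by the agreement-set scan described above and checks on the binary square that the induced graph is the 4-cycle and that the parity tag is a proper 2-coloring.

```python
from itertools import product

def confusable(x, y, views):
    """One-shot confusability oracle: distinct tuples x, y are confusable
    iff some admissible view (a set of coordinate indices) is contained
    in their coordinate-agreement set, i.e. fails to separate them."""
    if x == y:
        return False
    agree = {i for i, (a, b) in enumerate(zip(x, y)) if a == b}
    return any(set(v) <= agree for v in views)

def is_proper_tagging(tag, states, views):
    """Theorem V.1 criterion: a tag map gives exact recovery iff it is a
    proper coloring of the confusability graph."""
    return all(not confusable(x, y, views) or tag[x] != tag[y]
               for x in states for y in states)

views = [{0}, {1}]                       # each location reveals one coordinate
states = list(product([0, 1], repeat=2))

# Edges of the induced graph: exactly the 4-cycle, diagonals missing.
edges = {frozenset((x, y)) for x in states for y in states
         if confusable(x, y, views)}
assert len(edges) == 4
assert not confusable((0, 0), (1, 1), views)   # opposite corners separated

parity = {s: s[0] ^ s[1] for s in states}       # the 2-ary parity tag
assert is_proper_tagging(parity, states, views)
```

The oracle is the O(d + Σ_ℓ |V_ℓ|) direct-scan test from the previous subsection: one agreement-set computation followed by a containment scan over the views.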
This statement is packaged through an explicit simple confusability graph object, and for n repeated copies the full block confusability graph is identified with the n-fold strong power of the one-shot confusability graph. The zero-error criterion is therefore literally a graph-colorability theorem on an explicit power family rather than only an analogy. (L: MFT34-35)

Theorem V.2 (Success-set characterization). In the same model, exact decoding on a designated success set S is equivalent to T-colorability of the induced confusability subgraph on S. (L: MFT9)

Proof. Restrict the argument of Theorem V.1 to the success set. Exactness is required only on S, so the coloring condition is required only on confusable pairs inside S. ■

Theorem V.3 (Exact finite-error success budget). For each tag alphabet size T > 0, there is a largest success-set size

M_T := max { |S| : S induces a T-colorable confusability subgraph }.

An exact decoder with T tags exists on a success set S if and only if |S| ≤ M_T, after replacing S by some T-colorable success set of size M_T. Under the uniform source, the optimal exact success probability is therefore M_T / |X|. (L: MFT14-15)

Proof. Theorem V.2 identifies exact success sets with induced T-colorable subgraphs. Since the latent alphabet is finite, a maximum-cardinality such set exists. The mechanized characterization exposes this optimum directly. In the binary four-cycle witness, the optimum at T = 1 is exactly two states, so the best uniform exact success probability is 2/4 = 1/2. ■

The same characterization is also available in weighted form. For any finite source weight w on the latent alphabet, the maximum mass of a T-colorable induced subgraph is the exact finite success value under budget T. This is the weighted version of the same structural claim: under limited budget, the best exact decoder is the largest colorable safe subworld allowed by the induced failure graph.
(L: MFT18-19) Equivalently, this optimum can be packaged as an explicit exact finite rate-distortion value for the extended model at tag budget T: every exact success set has weighted mass at most this value, and some exact decoder attains it. Thus the extended model no longer has only a converse inequality; at the one-shot level it has an exact weighted success law rather than merely an upper bound. (L: MFT23-24)

Theorem V.4 (First non-clique witness). There exists a binary two-fact system with single-coordinate observation views whose confusability graph is not a clique. In that system, two tags suffice for exact recovery, one tag is impossible, and any one-tag decoder can be exact on at most half of the four latent states under the uniform source. (L: MFT5-8, MFT10)

Proof. The witness is the four-state binary square with one location revealing the first coordinate and the other revealing the second. States that agree on one coordinate are confusable through the corresponding location, but opposite corners are not confusable, so the graph is a 4-cycle rather than a clique. An explicit exact two-tag decoder is obtained by the parity coloring c(x1, x2) = x1 ⊕ x2. With one tag, any exact success set must be an independent set of the cycle, hence has size at most two, yielding success probability at most 1/2 under the uniform source. ■

Theorem V.5 (Block-composition law). If two partial-observation systems are colorable with tag alphabets of sizes T1 and T2, then their block-composed product system is colorable with a tag alphabet of size T1 T2. (L: MFT11-13, MFT16)

For later use, recall the strong product definition: if G and H are graphs, then G ⊠ H has vertex set V(G) × V(H), and (u, v) is adjacent to (u′, v′) exactly when either u = u′ and v ∼ v′, or u ∼ u′ and v = v′, or u ∼ u′ and v ∼ v′.
The block-composition theorem is model-specific because the admissible product views generate exactly these three adjacency cases.

Proof. Take proper colorings of the two component confusability graphs and encode the pair of colors as one color in the product alphabet. If two product states are confusable, then some allowed product view fails to separate them. In the product system this means that either the first coordinates agree and the second coordinates are confusable, or the second coordinates agree and the first coordinates are confusable, or both coordinates are confusable. These are exactly the three adjacency cases in the strong product G1 ⊠ G2. Hence every confusable product pair is separated by at least one component coloring, and therefore by the paired color as well. ■

This product law now has both directions in the formal development. The upper direction is the multiplicative coloring construction above. More sharply, once both location alphabets are nonempty, the product confusability graph is exactly the strong product of the two component confusability graphs. The graph object is therefore not just analogous to zero-error graph products; it is literally a strong-product construction for the induced failure map. (L: MFT25-26) The lower direction is clique-based: if the first component contains a confusability clique of size c1 and the second contains one of size c2, then the product system contains a confusability clique of size c1 c2, so any exact product decoder requires at least c1 c2 tags. The same mechanism also propagates to finite-error success sets: the largest exactly decodable success set in the product system is at least the product of the largest exactly decodable success sets in the two components. At the one-shot zero-error coding level, the maximum independent-set code size also obeys the same product lower bound.
Thus the extended model now has an honest strong-product graph law together with matching lower/upper budget consequences. (L: MFT17)

At this point the induced graph class has four explicit structural features. It contains non-clique graphs, as witnessed by the binary square. It contains the cluster-collapse subclass on which the later equality theorem is exact. It is closed under the block-composition operation, which becomes strong product on confusability graphs. And, by the agreement-set characterization above, it is exactly a monotone agreement-set class rather than an arbitrary graph family on the full tuple space. The iterated family below packages these features into a concrete infinite class.

a) Example family: iterated binary squares.: The four-state witness above is only the smallest member of a larger model-generated family. Let W denote the binary square system of Theorem V.4. Composing m independent copies of W yields a system with 2m binary fact coordinates and 4^m latent states whose confusability graph is the m-fold strong power C4^⊠m. Because each copy of W is exactly recoverable with two tags, repeated application of Theorem V.5 gives exact recovery on the m-fold product with 2^m tags. Conversely, each copy contains a confusability clique of size 2, so the clique lower bound in the same product theorem yields a confusability clique of size 2^m in the product system. Hence every exact decoder for this family requires at least 2^m tags, and therefore the exact tag budget is T_exact(W^⊗m) = 2^m. This provides an infinite family of non-clique examples, generated directly by the model, on which the graph object, the exact budget, and the strong-product law all interact nontrivially rather than only in the single four-cycle case.
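The small members of this family can be brute-forced. The sketch below is our own illustrative code: it encodes m = 2 composed copies of W with one coordinate pair per block-view (our encoding of the product views, following the conjunction semantics of the block model) and confirms that the independence number of the induced graph is 2^m, matching α(C4^⊠m) = 2^m.

```python
from itertools import combinations, product

def confusable(x, y, views):
    """Agreement-set confusability oracle (as in the one-shot model)."""
    if x == y:
        return False
    agree = {i for i, (a, b) in enumerate(zip(x, y)) if a == b}
    return any(set(v) <= agree for v in views)

def alpha(states, views):
    """Independence number of the induced confusability graph, brute force."""
    for k in range(len(states), 0, -1):
        for S in combinations(states, k):
            if all(not confusable(x, y, views) for x, y in combinations(S, 2)):
                return k
    return 0

# One copy of W: two coordinates, single-coordinate views; alpha(C4) = 2.
assert alpha(list(product([0, 1], repeat=2)), [{0}, {1}]) == 2

# Two composed copies: four coordinates; each product view observes one
# coordinate per block, so confusability requires a match in BOTH blocks.
views2 = [{a, b} for a in (0, 1) for b in (2, 3)]
assert alpha(list(product([0, 1], repeat=4)), views2) == 4   # = 2^2
```

The two-block check exercises exactly the strong-product adjacency C4 ⊠ C4: two distinct 4-tuples are confusable iff each block is equal-or-adjacent in C4.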
At this point the paper has moved decisively beyond the trivial threshold story: above the unit-rate corner, not all failures are equally severe, because the architecture can induce a non-clique topology with a strictly more structured recovery law. (L: MFT11-13, MFT16-17, MFT29-30)

Once exact one-shot recovery is governed by graph colorability, repeated composition lifts the theory to strong graph powers and asymptotic zero-error rates. The next subsection makes that lift explicit.

VI. BLOCK COMPOSITION, STRONG POWERS, AND ASYMPTOTIC CAPACITY

This section studies how the induced failure topology scales. Once exact recovery is governed by colorability of the one-shot confusability graph, repeated composition does not reset the ambiguity; it composes the same failure structure across blocks. The next results identify the correct strong-power object and the resulting asymptotic zero-error rate.

A. Motivation: Repeated Composition

What happens if we repeat the partial-view experiment n times? If we compose n independent copies of the same encoding system, does the ambiguity explode uncontrollably, or does it grow at a predictable rate? The answer depends on the failure topology:
• If the system were fully incoherent (confusability graph a clique), every block would multiply the confusion. The ambiguity would grow without any exploitable structure.
• Because our confusability graph has structure (as in the 4-cycle example of Section V-A), the ambiguity growth is controlled by that structure. Some state pairs remain distinguishable even across multiple blocks.

Under n-fold block composition, the confusability graph becomes its n-fold strong power: G_n = G^⊠n. Independent sets in the strong power relate systematically to independent sets in the base graph, yielding a supermultiplicative growth law that converges to a well-defined asymptotic rate. The theorem below quantifies this rate.

B. Block Composition Law

A genuine supermultiplicative block law holds:

α_{m+n} ≥ α_m α_n,

where α_n denotes the largest one-shot zero-error code size in the n-block composed system. This is the correct discrete precursor to Shannon-capacity analysis: the admissible code-size sequence is already supermultiplicative at the graph level. (L: MFT22)

More sharply, the block-level graph itself factors correctly. Confusability between concatenated (m + n)-block states is equivalent to adjacency in the strong product of the m-block and n-block confusability graphs, packaged formally as an explicit graph isomorphism. Thus the repeated-block system is not only bounded by strong-product arguments at the codebook level; it is literally a strong-product graph-power object for the same induced failure map. (L: MFT29-30, MFT38-39)

a) Conjunction, not disjunction.: Block confusability requires a single location tuple ℓ = (ℓ1, ℓ2) that works simultaneously for both blocks:
• Block 1: observe location ℓ1, must match
• Block 2: observe location ℓ2, must match

Two (m + n)-block states are confusable iff there exists one location tuple making observations match across all blocks. This is not "confusable in block 1 OR confusable in block 2" but rather "∃ location tuple, ∀ blocks: match." That universal quantification over blocks is exactly the strong product condition: for each block, either equal or confusable (matching on some location), with at least one block strictly confusable.

b) Concrete example.: Consider two copies of the binary square, and compare the states ((0, 0), (0, 0)) and ((0, 1), (1, 1)):
• Block 1: (0, 0) ∼ (0, 1) in C4 (agree on x1)
• Block 2: (0, 0) ≁ (1, 1) in C4 (disagree on both coordinates)

An "OR-product" would mark these confusable because block 1 matches. But the actual model requires one location pair (ℓ1, ℓ2) matching both blocks. Block 1 can match via ℓ1 = 1 (observing x1).
Block 2 cannot match: location 1 gives 0 vs 1, and location 2 gives 0 vs 1. No location pair works, so these states are not confusable, exactly as the strong product predicts.

Iterating the same construction yields the derived n-block lower bound. If α_1 denotes the one-shot maximum independent-set code size, then the mechanized block-power theorem gives

α_1^n ≤ α_n,

where α_n is the largest one-shot zero-error code size in the n-fold block-composed system. This is the first asymptotic-capacity scaffold in the extended model: repeated block composition already forces the expected exponential growth law before any Shannon-capacity or ϑ-style upper theory is introduced. Once the architecture determines the one-shot failure graph, that same graph already controls the asymptotic lower law. (L: MFT21)

For notation, write Θ(G) for the Shannon capacity of a confusability graph G in logarithmic units, and write ϑ∞(G) for the corresponding asymptotic Lovász-ϑ upper value. When the object is a view family V rather than an abstract graph, we write Θ(G_V) and ϑ∞(G_V) for the same quantities computed on the induced confusability graph G_V. These normalized block rates can be packaged into an explicit lower asymptotic capacity envelope

C* := sup_{n≥1} (log α_n)/n,

implemented as an ENNReal supremum of the positive block-rate sequence. Formally, α_n is identified directly with the maximum finite independent-set size of the n-block confusability graph, the block graph is identified with the n-fold strong power of the one-shot confusability graph, and the same envelope is proved equal to the graph-side Shannon lower-capacity expression built from those strong powers. (L: MFT31-32, MFT35-36)

The asymptotic scaffold is also not limited to a lower envelope statement. On the graph side itself, strong-product lower bounds for independent-set cardinality induce monotonicity of normalized strong-power rates along repeated multiples.
Specializing those generic graph theorems back to the model-generated confusability graph yields the repeated-block statement below. (L: MFT40-42) Repeating a fixed m-block system k times yields

α_m^k ≤ α_{km},

and hence

(log α_m)/m ≤ (log α_{km})/(km).

Thus rates along repeated block multiples are monotone from below toward the same capacity envelope. This is still a lower theory rather than a full Shannon-capacity theorem, but it is already a genuine asymptotic graph-rate statement for the induced failure topology. (L: MFT33)

Once the block confusability graph has been identified with strong powers of the one-shot graph, the asymptotic rate theory is exactly the classical Shannon-capacity framework specialized back to the model-generated graph family.

Theorem VI.1 (Asymptotic Shannon-capacity theorem). Let α_n denote the largest one-shot zero-error code size in the n-block composed system. Then the normalized block rates (log α_n)/n converge to a real asymptotic Shannon-capacity value, and that value is equal to the supremum of the finite block-rate sequence. (L: MFT43-49, MFT56)

Proof. The argument rests on two observations: supermultiplicativity of independent-set sizes under strong product, and Fekete's lemma.

Supermultiplicativity. Recall that the n-block confusability graph G_n is the n-fold strong power G^⊠n of the one-shot graph G. For any two graphs H and K, an independent set in H ⊠ K can be constructed from independent sets in H and K as follows. If I_H ⊆ V(H) and I_K ⊆ V(K) are independent, then the Cartesian product I_H × I_K ⊆ V(H) × V(K) is independent in H ⊠ K: if (u, v) and (u′, v′) are adjacent in the strong product, then either u = u′ and v ∼ v′, or u ∼ u′ and v = v′, or u ∼ u′ and v ∼ v′. In each case at least one of the pairs {u, u′} or {v, v′} is an edge inside the corresponding independent set, which is impossible. Hence α(H ⊠ K) ≥ α(H) α(K).
Iterating gives

α_{m+n} = α(G^⊠(m+n)) ≥ α(G^⊠m) α(G^⊠n) = α_m α_n,

so the sequence {α_n} is supermultiplicative.

Fekete convergence. The inequality α_{m+n} ≥ α_m α_n is equivalent to subadditivity of the sequence {−log α_n}. The trivial bound α_n ≤ |V(G)|^n ensures that −log α_n ≥ −n log |V(G)|, so the normalized sequence is bounded below. By Fekete's subadditivity lemma, the normalized limit

lim_{n→∞} (log α_n)/n = sup_{n≥1} (log α_n)/n

exists and equals the supremum of the finite block-rate sequence. This limit is the asymptotic Shannon capacity Θ(G) in logarithmic units. The mechanized development packages the same argument with explicit sequence objects and convergence lemmas; the textual expansion above makes the independent-set product construction and the Fekete application explicit for accessibility. ■

c) Running example: binary square.: For the 4-cycle C4 from Section V-A, the independence number is α(C4) = 2 (choose either diagonal). The strong power C4^⊠n has α(C4^⊠n) = 2^n, giving normalized rate (log 2^n)/n = log 2. The asymptotic Shannon capacity is therefore Θ(C4) = log 2. This matches the exact tag-budget growth from the iterated binary-square family discussed earlier: the one-shot budget is 2, and the m-block exact budget grows as 2^m.

d) Relation to graph capacity theory.: This places the extended model directly in the classical zero-error graph-capacity setting. The difference from the classical channel picture is generative rather than formal: here the confusability graphs arise from deterministic view families and partial observations on latent tuples. We do not claim a new general Shannon-capacity theorem for arbitrary graphs; the new content is that this partial-view model reaches the classical graph machinery in a non-clique regime, so the architecture determines the graph once and that graph then determines both one-shot and asymptotic recoverability.
The clique-fiber regime recovers the classical cluster-graph equality case, while the transitivity criterion identifies when that case is produced by the view-family model. The non-clique witness already shows that the model also generates genuinely nontrivial graph behavior.

VII. THETA UPPER THEORY

This section supplies the asymptotic upper law for the same induced failure graph. The previous section showed that repeated composition turns view-induced ambiguity into strong powers and hence into an asymptotic recoverability rate. The question now is how large that rate can be. The first upper bound is complement-chromatic. The stronger package identifies the same one-shot upper invariant with standard orthonormal, primal-PSD, and dual-theta forms and then lifts those bounds to strong powers.

The complement-chromatic bound is the first upper bound. The complement of the strong-power confusability graph is identified with the coordinatewise "there exists an offending coordinate" graph on the complement, and a coloring of the one-shot complement graph Ḡ lifts to a coloring of every strong-power complement by coloring each coordinate separately. Consequently,

α(G^⊠n) ≤ χ(Ḡ)^n,    (log α(G^⊠n))/n ≤ log χ(Ḡ),

and the asymptotic capacity is bounded above by log χ(Ḡ). (L: MFT50-55)

At the one-shot level, the same upper invariant is matched with standard witness, orthonormal-representation, primal-PSD, and dual-theta formulations by explicit feasible-object equivalences. Applying these one-shot bounds to strong powers yields subadditive upper-rate sequences, so Fekete's lemma gives asymptotic primal and dual upper values, each equal to the infimum of its finite power-rate bounds. The finite block rates are bounded above by either sequence, hence

Θ(G) ≤ ϑ∞(G).

The same induced topology that governs exact recovery therefore also carries a classical asymptotic upper theory.
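For the binary square the complement-chromatic ceiling can be computed exhaustively. The sketch below is our own illustrative code (helper names are ours): it builds C4 and its complement and verifies χ(Ḡ) = 2, so α(C4^⊠n) ≤ 2^n and Θ(C4) ≤ log 2, matching the 2^n diagonal lower bound exactly.

```python
from itertools import product

# One-shot binary-square confusability graph C4: distinct bit pairs are
# adjacent iff they agree on some coordinate (single-coordinate views).
V = [(0, 0), (0, 1), (1, 1), (1, 0)]
adj = {frozenset((x, y)) for x in V for y in V
       if x != y and (x[0] == y[0] or x[1] == y[1])}
comp = {frozenset((x, y)) for x in V for y in V
        if x != y and frozenset((x, y)) not in adj}

def chromatic_number(vertices, edges):
    """Smallest T admitting a proper T-coloring, by exhaustive search."""
    for T in range(1, len(vertices) + 1):
        for col in product(range(T), repeat=len(vertices)):
            c = dict(zip(vertices, col))
            if all(c[a] != c[b] for a, b in map(tuple, edges)):
                return T
    return len(vertices)

# The complement of C4 is a perfect matching (the two diagonals), so its
# chromatic number is 2: the ceiling log chi(comp) = log 2 is tight here.
assert chromatic_number(V, comp) == 2
```

An independent set in G is a clique in Ḡ, so it meets each color class of a proper coloring of Ḡ at most once; that is the one-shot step α(G) ≤ χ(Ḡ) that the power bound iterates.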
The formal development proves these one-shot equivalences and the resulting asymptotic primal/dual upper laws. (L: MFT57-84)

Interpretationally, the complement-chromatic bound is the first combinatorial ceiling, while the Lovász-ϑ package supplies the sharper geometric and semidefinite upper theory attached to the same confusability graph. Once a deterministic partial-view architecture generates a failure graph, the standard zero-error upper hierarchy attaches to that graph without further model-specific modification. What remains open is the sharpness of this upper theory for the graph classes generated by the extended model, not the existence of standard primal-dual theta packaging.

VIII. EQUALITY CHARACTERIZATION

This section asks when the upper theory of the induced failure graph is exactly sharp. The graph-theoretic equality facts used here are classical for cluster graphs; the model-specific question is when the partial-view architecture generates that collapse regime. Transitivity of confusability is the key structural condition exporting the view-family model to the cluster-graph equality case.

Open question. A precise open problem is to characterize which upward-closed agreement-set families produce transitive confusability, and therefore trigger the cluster-graph equality collapse. We have identified boundary facts: the binary square is a small witness where transitivity fails, while trivial single-view families and the fiber-coherent/meet-witnessed families provably produce transitive confusability and hence the equality collapse. A complete classification of upward-closed families that yield transitivity remains open and is an attractive direction for further work.

The original clique-fiber regime is now isolated as an equality subclass rather than only a threshold corollary.
If a graph is generated by a surjective label map x ↦ b(x) with adjacency exactly between distinct vertices in the same fiber, then one representative from each fiber forms an independent set of size |B|, while the complement graph is colorable by the same label map using |B| colors. This equality for cluster-graph structure is classical; the mechanized development packages the model-specific export needed here into a restricted-class theorem:

C(G) = ϑ∞(G) = log |B|.

Thus the extended theory now contains both a genuine non-clique graph regime with strict upper/lower separation and a clean equality regime recovering the original fiber picture inside the same asymptotic Lovász-ϑ framework. In particular, transitivity of confusability exports the model-generated graph directly to the classical cluster-graph setting through the following mechanized implication:

confusability is transitive ⟹ G_conf = Cluster(Π_cc).

Hence, if confusability is transitive, the base confusability graph is exactly the cluster graph on connected components, and the model-specific equality theorem yields

Θ(G_V) = ϑ∞(G_V) = log |Π_cc|,

where Π_cc is the connected-component partition of the base confusability graph. Between arbitrary base models and full fiber coherence, a checkable sufficient condition is that the view family be meet-witnessed: every pair of allowed views contains another allowed view inside their intersection. Under that condition, any two confusability witnesses can be pulled back to a common subview, so confusability is transitive and the same equality follows. Fiber coherence is then a stronger sufficient structural condition: if equality at one allowed location already fixes the full observation transcript, transitivity follows, the connected components are exactly the realized transcript fibers, and the equality sharpens further to

Θ(G_V) = ϑ∞(G_V) = log |T_real|.
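The meet-witnessed route can be sketched concretely. This Python snippet uses a hypothetical meet-witnessed view family on {0,1}² (views {0} and {0,1}, 0-indexed; the family and helper names are this example's assumptions): the meet-witness check passes, confusability is transitive, and the component partition Π_cc has two clusters, giving capacity value log 2.

```python
from itertools import product

# Hypothetical meet-witnessed family: the intersection of any two allowed
# views again contains an allowed view ({0} is inside {0} ∩ {0,1}).
views = [frozenset({0}), frozenset({0, 1})]
states = list(product((0, 1), repeat=2))

def meet_witnessed(fam):
    return all(any(W <= (S & T) for W in fam) for S in fam for T in fam)

def confusable(x, y):
    """x ~ y iff distinct and agreeing on every coordinate of some allowed view."""
    return x != y and any(all(x[i] == y[i] for i in S) for S in views)

assert meet_witnessed(views)

# Transitivity: any chained pair of witnesses composes through the subview.
transitive = all(confusable(x, z)
                 for x in states for y in states for z in states
                 if x != z and confusable(x, y) and confusable(y, z))
assert transitive

# Because the graph is a cluster graph, closed neighbourhoods are components;
# Pi_cc is the fiber partition of coordinate 0, so the capacity value is log 2.
components = {frozenset(y for y in states if y == x or confusable(x, y))
              for x in states}
assert len(components) == 2
```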
At the witness level, this same collapse admits a local composition form: base confusability is the edge union of the single-view cluster graphs, and transitivity is equivalent to closure of those fixed-view witnesses under two-step composition through an intermediate state. This is the exact local criterion for the current equality mechanism: the collapse to a cluster graph is determined by how single-view witnesses compose, not by an opaque global coincidence of the full confusability graph. Thus the equality question is structural: it identifies when genuinely non-clique failure geometry collapses to exact cluster behavior. (L: MFT85-91, MFT100-102, MFT107-109, MFT136-137)

a) Running example: Binary square. The binary square of Section V-A does not have transitive confusability: (0,0) ∼ (0,1) and (0,1) ∼ (1,1), but (0,0) ≁ (1,1). The 4-cycle is not a cluster graph, so the transitivity-based equality collapse does not apply. In this special case the classical graph invariants still satisfy Θ(C₄) = ϑ∞(C₄) = log 2, but that coincidence no longer comes from the cluster-collapse mechanism isolated in this section. The binary square therefore marks the boundary of the paper's equality route: it is the first non-clique witness, yet not a transitive one.

Hierarchy summary. This equality route has one transitivity-based sufficient condition and two stronger sufficient conditions. The table below records only the logical structure and the resulting capacity value.

Condition       | Logical status                           | Structural consequence                                       | Capacity value
Fiber coherence | Stronger sufficient condition            | Connected components are realized transcript fibers          | log |T_real|
Meet-witness    | Sufficient for transitivity and collapse | Base confusability collapses to the component cluster graph  | log |Π_cc|
Transitivity    | Sufficient collapse condition used here  | G_conf = Cluster(Π_cc)                                       | log |Π_cc|

IX.
AFFINE DUAL: COORDINATE MATROID

The preceding sections developed the main graph-capacity and equality arc for partial views. This section turns to a second lens on the same model: which fact coordinates determine others across the realized state family? Under an affine restriction on the realized state family, that determination problem reduces to standard linear algebra and yields a representable matroid on fact indices. This matroid is the coordinate-side dual of the same structure that generates the confusability graph, and it provides computationally tractable upper bounds on confusability and capacity. This matroid perspective connects the problem to the rich literature on representable matroids in coding theory and combinatorial optimization, where matroid rank functions provide tractable bounds on code parameters.

Proposition IX.1 (Affine fact-matroid specialization). Assume the realized state family is affine over a field, so that the valid latent tuples form an affine translate of a linear subspace of the ambient fact space. For any fact set S and fact index i, semantic determination of i by S is equivalent to membership of the coordinate functional for i in the linear span of the coordinate functionals indexed by S. Consequently the fact indices carry a representable matroid: a fact set is independent exactly when its coordinate functionals are linearly independent, and the minimal determining fact sets are exactly the bases, all of common cardinality equal to the rank. (L: AFM1-5)

Proof. Let the affine realized-state family be A = a₀ + V, where a₀ is a fixed origin and V is a linear subspace of the ambient fact space. Fix a fact set S and a fact index i. By definition, fact i is semantically determined by S exactly when for all x, x′ ∈ A, agreement on every coordinate in S forces agreement on coordinate i.
Writing d = x − x′, this is equivalent to the statement that every direction vector d ∈ V whose coordinates in S vanish also satisfies d_i = 0. Let e_j denote the coordinate functional for fact j. The condition above says precisely that

V ∩ ⋂_{j∈S} ker(e_j) ⊆ ker(e_i).

In finite-dimensional linear algebra, this is equivalent to e_i lying in the span of the coordinate functionals {e_j : j ∈ S}. Thus semantic determination by S is exactly span membership of the corresponding coordinate functional. The coordinate functionals therefore realize a representable matroid on fact indices. In that matroid, independence is linear independence of the coordinate functionals, bases are the maximal independent spanning sets, and every basis has the common rank cardinality. Since span corresponds exactly to semantic determination, the minimal determining fact sets are precisely those bases. ■

The matroid and the confusability graph are complementary objects induced by the same realized-state model. Under the common specialization to finite affine families with coordinate-projection views, the matroid rank provides computationally tractable upper bounds on confusability and capacity. The tractability claim is representation-sensitive and should be read that way. We assume the affine family A = a₀ + V is given by an explicit linear presentation of V, for example a basis or generator matrix over F_q, together with the explicit coordinate-subset view family. Under that input model, for each coordinate set S the quantity t(S) is exactly the finite-dimensional rank of the restricted coordinate map π_S|_V: the formalization identifies the span-based rank quantity with the finrank of the image of that explicit projection map.
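The span-membership test is ordinary Gaussian elimination once V is explicitly presented. The Python sketch below works over F₂ with a hypothetical two-dimensional family whose valid states satisfy x₂ = x₀ + x₁ (the generator matrix G and the helpers `col`, `determines` are this example's assumptions, not the paper's notation).

```python
def f2_rank(vectors):
    """Rank over F2 of integer bitmask row vectors, by Gaussian elimination."""
    basis = {}                       # highest set bit -> reduced vector
    for v in vectors:
        while v:
            b = v.bit_length() - 1
            if b in basis:
                v ^= basis[b]        # eliminate the leading bit
            else:
                basis[b] = v
                break
    return len(basis)

# Hypothetical affine family over F2: V spanned by the rows of G, so valid
# states satisfy x2 = x0 + x1 (a parity fact); a0 = 0.
G = [[1, 0, 1], [0, 1, 1]]

def col(j):
    """Coordinate functional e_j restricted to V, written over the basis of V."""
    return sum(G[k][j] << k for k in range(len(G)))

def determines(S, i):
    """S semantically determines i iff e_i lies in span{e_j : j in S} on V."""
    base = [col(j) for j in S]
    return f2_rank(base + [col(i)]) == f2_rank(base)

assert determines({0, 1}, 2)        # the parity coordinate is determined
assert not determines({0}, 2)       # one coordinate alone is not enough
assert determines({0, 2}, 1)        # equivalently x1 = x0 + x2 over F2
```

The minimal determining sets here are exactly the two-element coordinate sets, i.e., the bases of a rank-2 representable matroid, matching Proposition IX.1.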
(L: AFM10-11) It also proves the basic size bounds t(S) ≤ |S| and t(S) ≤ dim V, equivalently dim(im(π_S|_V)) ≤ min{|S|, dim V}, and shows that t(S) = |S| on linearly independent coordinate families while t(S) reaches the full ambient determining rank exactly on determining sets. (L: AFM6-9, AFM12) Hence each t(S) is a standard linear-algebraic rank quantity that can be computed by Gaussian elimination on an explicit presentation of V, and the bounds of Corollary IX.3 are then computable in polynomial time in the ambient dimension, the dimension of V, and the number of admissible views. Without an explicit linear presentation of V, no algorithmic tractability claim is intended.

The utility of this matroid structure is immediate: t(S) provides an explicit upper bound on the Shannon capacity of the induced confusability graph, bypassing the hardness of the general capacity computation. Computing Shannon capacity for general graphs is hard (the independence number is NP-hard, and even the Lovász-ϑ bound requires semidefinite programming). In contrast, once the affine family is explicitly presented, the relevant rank quantities reduce to standard linear algebra.

A. View-fiber dimensions and capacity

Let A = a₀ + V with V an r-dimensional linear subspace over a finite field F_q, and let admissible views be coordinate-subset projections. For a coordinate set S write t(S) for the rank of the corresponding coordinate functionals on V.

Proposition IX.2 (View-fiber clique sizes). Let A and t(S) be as above. Each projection-fiber for view S is an affine subspace of dimension r − t(S) and size q^{r−t(S)}. The single-view confusability graph G_S is therefore a disjoint union of q^{t(S)} cliques each of size q^{r−t(S)}. In particular

α(G_S) = q^{t(S)},    Θ(G_S) = t(S) log q,

where α is the independence number and Θ is Shannon capacity (logarithms in the same base used elsewhere).

Proof.
The projection onto coordinates S is an affine-linear map whose linear part restricted to V has rank t(S). Its kernel is therefore a linear subspace of V of dimension r − t(S), so every fiber is an affine translate of that kernel and has cardinality q^{r−t(S)}. The fibers partition A, and within each fiber every pair of states is confusable under S, so G_S is a disjoint union of equal-size cliques. The number of fibers equals |A|/q^{r−t(S)} = q^{t(S)}, so one can select one representative per fiber to obtain an independent set of size q^{t(S)}, whence α(G_S) = q^{t(S)}. The block-power identity for such clustered graphs gives α(G_S^{⊠n}) = q^{n t(S)}, and normalization yields Θ(G_S) = t(S) log q. ■

Corollary IX.3 (Matroid ranks give capacity bounds). Let G be the full confusability graph generated by a family of coordinate-subset views 𝒮. Then

α(G) ≤ min_{S∈𝒮} q^{t(S)}   and   Θ(G) ≤ min_{S∈𝒮} t(S) log q.

Thus the matroid rank function t(·) on coordinate sets supplies explicit upper bounds on one-shot and asymptotic exact-recovery rates for the induced confusability graph. (L: AFM6-12)

Proof. An independent set for G must be independent in each single-view graph G_S, hence its size is at most α(G_S) = q^{t(S)} for every S ∈ 𝒮. The inequalities follow from taking the minimum over S and applying the block-power argument from the proposition. ■

Remark. The matroid rank t(S) is the "informational dimension" captured by coordinates in S, directly analogous to the rank function in representable matroids studied in combinatorial optimization [10] and coding theory [6]. Here t(S) = r iff S determines the entire state, and any admissible view containing a basis removes confusability entirely (the induced graph is edgeless).
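Proposition IX.2 and Corollary IX.3 can be exercised end-to-end on an explicit presentation. The Python sketch below reuses the hypothetical F₂ family V = span{(1,0,1),(0,1,1)} (r = 2, q = 2; the generator matrix and helper names are this example's assumptions): it checks that each view S has exactly q^{t(S)} fibers of size q^{r−t(S)} and evaluates the min-rank capacity bound for the single-coordinate view family.

```python
from itertools import product

def f2_rank(vectors):
    """Rank over F2 of bitmask row vectors (Gaussian elimination)."""
    basis = {}
    for v in vectors:
        while v:
            b = v.bit_length() - 1
            if b in basis:
                v ^= basis[b]
            else:
                basis[b] = v
                break
    return len(basis)

# Hypothetical presentation: V = span{(1,0,1), (0,1,1)} over F2, r = 2, q = 2,
# origin a0 = 0, so the realized family is the even-weight parity code.
G = [[1, 0, 1], [0, 1, 1]]
r, q = 2, 2
states = [tuple((a * G[0][j] + b * G[1][j]) % 2 for j in range(3))
          for a, b in product((0, 1), repeat=2)]

def t(S):
    """Matroid rank of the coordinate functionals indexed by S."""
    return f2_rank([sum(G[k][j] << k for k in range(r)) for j in S])

for S in ({0}, {0, 1}, {0, 1, 2}):
    fibers = {}
    for x in states:
        fibers.setdefault(tuple(x[j] for j in sorted(S)), []).append(x)
    assert len(fibers) == q ** t(S)                                  # q^{t(S)} fibers
    assert all(len(f) == q ** (r - t(S)) for f in fibers.values())   # size q^{r-t(S)}

# Corollary IX.3 for the view family {{0}, {1}}: Theta(G) <= min_S t(S) log q.
view_rank_bound = min(t(S) for S in ({0}, {1}))
assert view_rank_bound == 1        # capacity bound log 2
```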
These finite-affine, coordinate-view statements show the matroid is directly applicable to the graph/capacity arc; extensions beyond this specialization are directions for future work.

a) Running example: Binary square. The binary square {0,1}² from Section V-A is an affine space over F₂ with r = 2. Each single-coordinate view has matroid rank t({1}) = t({2}) = 1, giving fiber size 2^{2−1} = 2 and single-view capacity bound Θ(G_{i}) ≤ 1 · log 2 = log 2. The full confusability graph (the 4-cycle) achieves this bound, so the matroid rank bound is tight in this case. The matroid viewpoint shows that the 4-cycle structure arises precisely because each view captures only half the informational dimension.

X. UNIT-RATE STRUCTURAL INTERPRETATION

The main graph-capacity arc studies what happens once ambiguity survives and acquires nontrivial topology. This section returns to the opposite boundary case of the same model: the unit-rate corner in which that ambiguity never appears. It records two direct interpretations of that boundary case: derivation realizes the unit-rate regime, and leaving it produces the manual-update gap proved later in Section XI.

A. Derivation as the Single-Source Structure

An encoding system has unit independent rate when DOF(C, F) = 1. By Proposition III.6, this is exactly the regime in which every reachable state remains coherent. By Proposition III.7, any rate above 1 admits reachable disagreement states. Thus unit rate is the unique exact-consistency boundary, and the later rate-complexity statements are its operational cost interpretation. (L: COH1-2, BND1, RAT1, RED1)

B. Reading the Converse at Unit Rate

When ambiguity classes are trivial, no auxiliary information is needed and unit rate suffices.
When a nontrivial ambiguity class survives the observation transcript, Theorem IV.3 shows that exact recovery requires a sufficiently large side-information alphabet. If that side information is unavailable but exact correctness is still demanded, additional independently accessible support is required. That is exactly what moves the system above unit rate and leads to the update-cost lower bound of Section XI. Derivation is therefore the dependence structure that preserves visible views while removing independent rate. (L: CIA3, BND2, DER2)

C. Rate and Manual Update Cost

An encoding may expose many visible copies of a fact, but only the independent copies govern manual synchronization cost. At unit rate, one manual update changes the authoritative source and derived views follow automatically; above unit rate, each independent location must be synchronized manually. This is the cost interpretation of the threshold. (L: BND1-2)

D. Derived Views

Definition X.1 (Derivation). Location L_derived is derived from L_source for fact F iff an update to L_source determines the value of L_derived without an additional manual edit.

Derivation may occur at creation time, build time, commit time, or query time depending on the host system. The abstract point is unchanged: a derived location does not contribute an additional independently writable value for the same fact.

Proposition X.2 (Derivation Preserves Coherence). If L_derived is derived from L_source, then L_derived cannot diverge from L_source and does not contribute to DOF. (L: DER2)

Proof. By Definition X.1, the value at L_derived is determined by the value at L_source under admissible updates. There is therefore no reachable state in which the two locations carry incompatible values for the same fact. Since the derived location has no independently writable contribution, it does not increase DOF.
■

If all encodings of F except one are derived from that one, then Proposition X.2 leaves exactly one independent source, hence DOF(C, F) = 1 and exact consistency follows from Proposition III.6. (L: DER2, COH1) The same dependence structure can be realized in many host systems. Its role here is only to prepare the realizability criterion.

XI. THRESHOLD AND RATE-COMPLEXITY COROLLARIES

The main graph-capacity arc characterizes the structure of exact failure under partial views. This section records one downstream operational consequence at the opposite boundary: independent rate governs manual synchronization cost in the same finite deterministic model.

A. Cost Model

Definition XI.1 (Modification Cost Model). Let δ_F be a modification to fact F in encoding system C. The effective modification complexity M_effective(C, δ_F) is the number of locations that must be edited manually to preserve correctness after applying δ_F. Only manual edits are counted. Derived locations updated automatically by the host system contribute zero effective cost.

B. Upper Bound at Unit Independent Rate

Theorem XI.2 (Rate-1 Upper Bound). If DOF(C, F) = 1, then M_effective(C, δ_F) = O(1). (L: BND1)

Proof. When DOF(C, F) = 1, there is exactly one independently writable source location for F. Updating that location is one manual edit; every other representation of F is derived and is updated automatically. The number of manual edits therefore stays bounded by a constant independent of the number of visible encodings. ■

C. Lower Bound Above the Threshold

Theorem XI.3 (Above-Threshold Lower Bound). If fact F is encoded at n independent locations, then M_effective(C, δ_F) = Ω(n). (L: BND2)

Proof. Each independent location can retain an outdated value unless it is edited directly. After a source update, every one of the n independent locations can therefore witness a distinct stale copy of the same fact.
Any exact repair strategy must bring each such location back into agreement, and in the worst case no single manual action can repair two independently writable locations at once. Hence the effective modification complexity grows at least linearly in n. ■

Lemma XI.4 (Information-Constrained DOF Lower Bound). If the available side-information mechanism exposes at most 2^L distinguishable tags while the latent ambiguity class has size K > 2^L, then no zero-error resolver can identify the true state from those tags alone. Any architecture that still guarantees exact correctness must therefore rely on additional independently accessible discriminating support, moving the system above unit rate. (L: CIA4)

Proof. Theorem IV.3 rules out exact recovery from the available tag budget. If exact correctness is nevertheless required, the architecture must supplement the insufficient side-information channel with additional independently accessible information. In the present model, that means leaving the rate-1 regime and paying the synchronization cost of Theorem XI.3. ■

D. The Unbounded Gap

Theorem XI.5 (Unbounded Gap). The ratio between the modification cost of an architecture with n independent encodings and the modification cost of a rate-1 architecture diverges:

lim_{n→∞} M_{DOF>1}(n) / M_{DOF=1} = ∞.

(L: BND3)

Proof. By Theorem XI.2, a rate-1 architecture has constant effective modification complexity. By Theorem XI.3, an architecture with n independent encodings incurs linear cost in n. The ratio therefore grows without bound. ■

Once a fact is duplicated across many independently writable locations, the cost of preserving exact consistency scales with the number of independent copies, not with the visibility of the fact.

XII. OPERATIONAL REALIZABILITY CRITERION

The main graph-capacity arc characterizes what happens once ambiguity survives and must be resolved structurally.
This section turns to the opposite boundary case of the same model: when can a concrete host system realize the unit-rate corner isolated earlier, and what two structural properties are needed for that purpose? In integrity language, the question is when structural integrity can be both enforced and certified by the host itself. The resulting criterion is architectural rather than a new graph-capacity theorem.

A host system realizes the rate-1 regime when one location is authoritative and every secondary representation is maintained as a deterministic view. Two capabilities are required:

1) Causal update propagation: updates to the authoritative source must automatically propagate to derived locations.
2) Provenance observability: the system must expose enough structural information to verify which locations are authoritative and which are derived.

Definition XII.1 (Verifiable Structural Integrity). An encoding system has verifiable structural integrity for structural facts if it both realizes structural integrity in reachable states and exposes enough host-native information to certify that this integrity-preserving dependence structure holds.

A. Confusability and Side Information

To connect realizability to information available at verification time, fix a fact F and consider the alternatives that remain compatible with the available observations. These alternatives form the same zero-error obstruction analyzed in Sections IV–VIII: if the host cannot separate the surviving ambiguity class, exact verification requires additional structural side information.

Lemma XII.2 (Confusability Clique Bound). If the confusability graph for F contains a clique of size k, then any zero-error side-information scheme distinguishing those k alternatives requires at least k distinct tags, equivalently at least log₂ k bits of side information. (L: OBS5)

Proof.
Within a k-clique, every pair of alternatives is confusable under the base observation. A zero-error tag assignment must therefore separate all k states. Distinct states cannot share a tag, so at least k tags are required. ■

If a host system does not expose enough structural information to separate the remaining alternatives, exact verification of the rate-1 regime is impossible. In security-adjacent terms, the host cannot certify that an apparently coherent state is genuinely authoritative rather than an undetected stale coincidence.

B. The Structural Timing Constraint

Definition XII.3 (Structural Fact). A fact F is structural if every location encoding F is fixed when the underlying object, declaration, or artifact is created. After that moment, the encoding can be read and compared, but not retroactively re-created without a new edit event.

Structural facts are the canonical exact-consistency setting because they expose a clear creation moment. Examples include schema declarations, registry membership rules, dependency metadata, and interface-level constraints.

Theorem XII.4 (Timing Constraint for Structural Derivation). For structural facts, derivation must occur no later than the moment at which the relevant encoding locations become fixed. (L: REQ1, TRI1)

Proof. Let t_fix denote the time at which the structural encoding becomes fixed. A derivation performed strictly before t_fix cannot depend on the completed source value; a derivation performed strictly after t_fix cannot alter the already fixed structural representation. Therefore structural derivation must occur at t_fix or as part of the same creation/update event. ■

C. Requirement 1: Causal Update Propagation

Definition XII.5 (Causal Update Propagation).
An encoding system has causal update propagation if every update to a source location is followed by updates to all locations derived from that source, with no reachable intermediate state in which source and derived locations disagree. Equivalently, propagation is part of the same admissible update event for the purposes of reachable-state analysis, rather than an asynchronous repair step with observable stale states.

This is the host-system manifestation of deterministic dependence. Without it, a temporal gap appears between source modification and derived-view repair.

Theorem XII.6 (Causal Propagation is Necessary for Unit Rate). Achieving independent rate 1 for structural facts requires causal update propagation. (L: REQ2)

Proof. By Theorem XII.4, structural derivation must occur at the moment the source-side structural encoding is fixed. If causal propagation is absent, then some derived location remains stale until a later manual action. During that interval the source and derived locations disagree, so the architecture does not realize exact consistency. Hence a verifiable rate-1 realization for structural facts requires causal propagation. ■

D. Requirement 2: Provenance Observability

Definition XII.7 (Provenance Observability). An encoding system has provenance observability if it supports queries that reveal which locations encode a fact, which of those locations are authoritative, and which are derived.

Provenance observability is the structural side information needed to certify that the system is genuinely in the single-source regime rather than merely appearing coherent on a small sample of states.

Theorem XII.8 (Provenance Observability is Necessary for Verifiable Unit Rate). Verifying that independent rate 1 holds requires provenance observability. (L: REQ3)

Proof.
To verify independent rate 1, one must enumerate the locations encoding the fact, identify the authoritative source, and confirm that every remaining location is derived from it. Without provenance queries, these checks cannot be completed from inside the host system. The architecture may be coherent on observed states, but the single-source claim is not verifiable. ■

E. Independence of the Two Requirements

The two requirements are logically separate.

Theorem XII.9 (Requirements are Independent).
1) An encoding system can have causal propagation without provenance observability.
2) An encoding system can have provenance observability without causal propagation.
(L: IND1-2)

Proof. For (1), consider a system that automatically refreshes all derived views after each source update but exposes no query interface for its derivation graph. Coherence may hold operationally, but the single-source claim is not inspectable. For (2), consider a system that records complete provenance metadata yet requires users to trigger propagation manually after each change. The derivation structure is visible, but stale views remain reachable. Hence neither property implies the other. ■

F. The Realizability Theorem

Corollary XII.10 (Operational Realizability Criterion). An encoding system can achieve verifiable independent rate 1, equivalently verifiable structural integrity, for structural facts if and only if it provides both causal update propagation and provenance observability. (L: SOT1)

Proof. Necessity follows from Theorems XII.6 and XII.8. For sufficiency, assume both properties hold. Causal propagation ensures that every derived location is updated as part of the same structural event as the source; provenance observability then certifies that all secondary encodings are in fact derived from that source.
The architecture therefore realizes a verifiable single-source encoding, i.e., independent rate 1, which is exactly verifiable structural integrity in the sense of Definition XII.1. ■

Applied to representative hosts, this criterion separates systems in which propagation and provenance are both host-native from systems missing one or both capabilities. Appendix A collects those representative readings in one table, and Supplement A contains the companion case-study traces and measurement notes.

XIII. RELATED WORK

A. Zero-Error and Side-Information Lineage

Shannon's zero-error framework and its graph-theoretic refinements by Körner and Lovász provide the closest conceptual starting point [1]–[3]. Witsenhausen's zero-error side-information problem is directly relevant because it studies exact recovery under side information through graph coloring and label budgets [4]. Our one-shot coloring statements lie in that lineage. As in Witsenhausen's formulation, an observation law induces the relevant confusability graph. The difference is that here the observation law is generated by a deterministic multi-fact tuple-space architecture with admissible coordinate-subset views, so the model itself constrains what graphs can arise.

a) Key distinction from Witsenhausen. Both settings can be viewed as starting from an observation law and passing to the induced confusability graph, so there is no sharp "induced" versus "given" divide. The difference is architectural and structural. Witsenhausen's setting starts from a side-information law and studies the coding problem for the graph it induces. Our setting starts from a deterministic partial-view architecture on latent tuples, with observations obtained by admissible coordinate-subset projections, and asks what graph structure that architecture can generate and what recovery laws follow from it.
This makes two model-side issues central: how the admissible view family constrains the induced graph class, and how changing the view topology changes the resulting failure graph and recovery budget. In the exact full-tuple-space coordinate-view model analyzed here, that graph class can now be characterized exactly: the realizable confusability relations are precisely those determined by upward-closed families of coordinate-agreement sets. The contribution is therefore not a new coloring theorem for arbitrary graphs, but an architectural realization theory connecting multi-fact tuple-space views, a fully characterized model-generated graph class, and exact zero-error recovery.

Slepian–Wolf coding and classical entropy converses supply the second line of influence [5], [6], [8], [9]. Coding-for-computing, functional compression, and graph-entropy/characteristic-graph viewpoints are also close neighbors because they study deterministic decoder targets and coding questions once an underlying graph or function structure has been specified [2], [11], [12].

Characteristic-graph vs. coordinate-projection models. Characteristic-graph and functional-compression formalisms handle arbitrary deterministic functions of the source and therefore subsume a broader class of recovery tasks than the present model. Our model intentionally restricts attention to coordinate-projection observations on labeled tuple spaces rather than arbitrary functions. That restriction is the source of our structural leverage: it allows a reduction of confusability to agreement-set membership and thereby enables proofs that (a) realizable confusability relations are exactly those determined by upward-closed agreement families, (b) coordinatewise alphabet permutations act by graph automorphisms, and (c) the induced graph class is monotone and closed under the block-composition (strong-product) operation.
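Property (b) is easy to witness on a finite instance. The following Python sketch checks, on the binary square with single-coordinate views (an illustrative choice; the helper names are this example's assumptions), that permuting the alphabet of one coordinate preserves the view-induced confusability relation, because agreement on a coordinate is invariant under any per-coordinate bijection.

```python
from itertools import product

# Binary square with single-coordinate views (0-indexed).
views = [frozenset({0}), frozenset({1})]
states = list(product((0, 1), repeat=2))

def conf(x, y):
    """View-induced confusability: distinct states agreeing on some full view."""
    return x != y and any(all(x[i] == y[i] for i in S) for S in views)

def flip0(x):
    """Coordinatewise alphabet permutation: swap 0 and 1 in coordinate 0."""
    return (1 - x[0], x[1])

# The permutation is a graph automorphism: the edge relation is preserved.
automorphism = all(conf(x, y) == conf(flip0(x), flip0(y))
                   for x in states for y in states)
assert automorphism
```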
These specific structural consequences do not follow from the general characteristic-graph viewpoint and are the reasons the paper can give deductive, architecture-first results rather than only applying general functional-compression theorems. In our setting the side information is deterministic rather than stochastic, but it plays the same operational role: exact recovery becomes impossible when the available observation/tag alphabet is too small to separate the remaining alternatives.

After a confusability graph is fixed, the colorability, strong-power, Shannon-capacity, and Lovász-ϑ machinery used later is classical. In particular, capacity–ϑ equality for cluster graphs is classical; the novelty here is identifying transitivity of view-generated confusability as the model-side condition that produces this equality subclass, together with meet-witnessing and fiber coherence as structural sufficient conditions. The paper therefore does not claim a new theorem about arbitrary graphs, but a deterministic partial-view model that generates a fully characterized monotone agreement-set graph class on the full labeled tuple space, an exact recovery/success-set/weighted-success theory for that model-generated class, and a structural equality route inside it. What it does not attempt is the broader unlabeled-isomorphism classification, or the corresponding classifications for restricted realized-state families and stochastic support-overlap models.

B. Consistency-Constrained Storage and Interaction

Consistency-constrained storage problems, including multi-version coding and related distributed-storage converses, show that correctness requirements impose unavoidable information costs [13]. Here the object under study is a modifiable encoding architecture, and the main question is the exact-consistency cost of multiple independently writable representations of one latent fact.

C.
Computational Realizations Reflection, metadata systems, and maintenance mechanisms in host platforms provide examples of how the realizability conditions can be instantiated in practice [ 14 ], [ 15 ]. These examples are auxiliary to the zero-error theory rather than part of its main no velty claim. D. F ormal V erification The Lean 4 formalization places the work in the tradition of mechanized mathematics and verified theory development [ 7 ], [ 16 ]. 18 X I V . C O N C L U S I O N The paper’ s main contribution is a structural theory of exact failure under deterministic partial views. Admissible view families induce confusability graphs on latent states; those graphs record which latent alternativ es the architecture cannot separate, and the y gov ern the exact recovery law . In the exact full-tuple-space coordinate-view model, this graph class is characterized exactly by upward-closed agreement- set families rather than treated as an arbitrary graph input. In the multi-fact partial-observ ation extension, exact reco very becomes colorability , exactness on a success set becomes induced-subgraph colorability , and the exact finite weighted- success value is determined by the largest colorable induced subgraph. (L: MFT9 , MFT18- 24 , MFT20 , MFT125- 131 ) Repeated composition preserves that failure structure rather than erasing it: the block-rate sequence is supermultiplica- tiv e, its normalized rates con ver ge by Fekete’ s lemma to a real asymptotic Shannon capacity equal to the supremum of the finite block rates, and the same capacity is bounded abov e by complement-chromatic and fixed Lovász- ϑ upper objects. The one-shot upper in variant is matched with standard orthonormal-representation, primal-PSD, and dual-theta forms. (L: MFT29- 30 , MFT43- 84 ) The upper theory is sharp on a structurally characterized subclass. 
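The colorability reading of exact recovery can be made concrete with a brute-force check. This is an illustrative sketch only (the name `proper_colorings` is ours, not the artifact's): an exact-recovery code with T tags is precisely a proper T-coloring of the confusability graph, so on the 4-cycle from the two-binary-fact instance one tag fails and two tags suffice.

```python
from itertools import product

def proper_colorings(states, edges, T):
    """All T-tag assignments that separate every confusable pair,
    i.e. all proper T-colorings of the confusability graph."""
    out = []
    for assignment in product(range(T), repeat=len(states)):
        tag = dict(zip(states, assignment))
        if all(tag[x] != tag[y] for x, y in edges):
            out.append(tag)
    return out

# The 4-cycle confusability graph from the two-binary-fact example:
states = [(0, 0), (0, 1), (1, 0), (1, 1)]
edges = [((0, 0), (0, 1)), ((0, 0), (1, 0)),
         ((0, 1), (1, 1)), ((1, 0), (1, 1))]
# No proper 1-coloring exists (the graph has edges), while the even/odd
# bipartition of the cycle gives exactly two proper 2-colorings.
```

An empty result for T = 1 and a nonempty result for T = 2 witness that a binary tag is necessary and sufficient here, matching the colorability equivalence.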
For the original clique-fiber subclass, asymptotic Shannon capacity, the fixed Lovász-ϑ upper bound, and the logarithm of the number of fibers coincide. The same equality exports back to the original view-family model through a broader structural route: transitivity of confusability yields the component-cluster collapse, meet-witnessing is a sufficient condition for that transitivity, and fiber coherence is a stronger sufficient condition under which the connected components are exactly the realized transcript fibers. (L: MFT85-91, MFT100-102, MFT136-137) The same latent-state framework also yields a fact-side affine specialization. The observation-side question asks which latent states are distinguishable from partial views; the fact-side question asks which coordinates determine others across the realized-state family. When the valid states form an affine family, the latter question becomes span membership of coordinate functionals, so the fact indices carry a representable matroid and the minimal determining fact sets are exactly the bases. In this affine regime, the same model carries a graph invariant on states and a matroid invariant on coordinates, and the latter supplies tractable upper bounds for the former. More sharply, the relevant affine rank quantity is exactly the finrank of the image of the restricted coordinate projection, which closes the representation-level link between the matroid language and explicit linear algebra. (L: AFM1-14) The deterministic finite converse remains the foundation of this theory. Once the observation transcript leaves a nontrivial ambiguity class, the same obstruction can be stated as confusability, counting, conditional-entropy, decoder-output, and finite-gap constraints, together with deterministic data processing and a budgeted finite-error extension.
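The span-membership criterion for the affine specialization admits a small Gaussian-elimination sketch. This is a hypothetical illustration over GF(2), not the artifact's Lean development (the names `gf2_rank`, `rank_on`, and `determines` are ours): present the direction space by generator rows; a fact set S determines fact j precisely when adjoining column j leaves the rank of the restricted generator matrix unchanged, i.e. every direction vanishing on S also vanishes at j.

```python
def gf2_rank(rows):
    """Rank over GF(2) of a matrix whose rows are int bitmasks,
    computed by leading-bit elimination."""
    pivots = {}                      # leading-bit position -> reduced row
    for row in rows:
        cur = row
        while cur:
            lead = cur.bit_length() - 1
            if lead in pivots:
                cur ^= pivots[lead]  # eliminate the leading bit
            else:
                pivots[lead] = cur
                break
    return len(pivots)

def rank_on(generators, cols):
    """Rank of the generator matrix restricted to the listed columns."""
    rows = []
    for g in generators:
        mask = 0
        for k, j in enumerate(cols):
            if g[j]:
                mask |= 1 << k
        rows.append(mask)
    return gf2_rank(rows)

def determines(generators, S, j):
    """S determines fact j iff column j lies in the span of the columns
    indexed by S, i.e. adjoining it does not raise the rank."""
    return rank_on(generators, list(S) + [j]) == rank_on(generators, list(S))

# Even-parity family on three bits: direction space spanned by 110 and 101.
gens = [(1, 1, 0), (1, 0, 1)]
```

On this parity family, the first two facts determine the third, while the first fact alone determines neither of the others; the rank comparison is exactly the finrank identity read as explicit linear algebra.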
The paper does not claim a new general theorem about arbitrary confusability graphs; rather, it shows that a partial-view encoding model can induce genuinely non-clique failure geometry and reach the classical graph-capacity machinery there. The same model also has a clean boundary theory: rate 1 is exactly the structural-integrity regime, higher rate makes integrity violations reachable, and realizability plus provenance characterize when that integrity can be certified by the host. As downstream consequences of that foundation, unit rate is the unique zero-incoherence regime, any higher independent rate makes ambiguity reachable, and the same obstruction yields an O(1) versus Ω(n) manual-update gap together with a realizability criterion based on causal propagation and provenance observability. Appendix A records representative host-level readings of that boundary-case criterion, and Supplement A adds before/after traces and case-study details showing how the same structural split appears in concrete systems. This boundary-case classification also has an integrity reading. In Pattern A and Pattern B architectures, the host cannot reliably distinguish a valid derived state from an undetected stale state by its own structural interface. The resulting incoherence is an integrity exposure that leaves stale but plausible states reachable and, in adversarial settings, creates latent attack surface inside the architecture itself. Computational representation. The graph-theoretic formulation is structurally exponential in the number of facts: for d facts over an alphabet of size q, the latent state space has cardinality q^d, so any explicit confusability-graph materialization necessarily starts from an exponentially large vertex set. (L: MFT113) A naive explicit construction ranges over at most q^{2d} ordered state pairs, formalized in the artifact through the corresponding product-cardinality and pair-count bounds.
(L: MFT116-117) At the same time, the graph need not be materialized: adjacency is already packaged by an implicit oracle on state pairs, formalized as a Boolean confusability test equivalent to the confusability relation itself, and also by an equivalent agreement-set oracle whose direct scan cost is bounded linearly in the total view size. (L: MFT114-115, MFT132-135) Under the explicit coordinate-view representation used throughout the paper, that one-shot oracle is therefore polynomial in its direct input size, computable by scanning the coordinates and the admissible view family in O(d + Σ_ℓ |V_ℓ|) time, hence O(Ld). In the affine specialization, the upper bounds are likewise polynomial-time computable once the direction space is presented explicitly by a basis or generator matrix, because the formalized rank quantity is exactly the finrank of the image of the restricted coordinate projection, so each bound reduces to Gaussian elimination on that explicit map. (L: AFM10-11) The exponential barrier therefore lies in full graph materialization and global capacity computation, not in the local adjacency test or in the affine upper-bound surrogate. Limitations and future work. The present paper develops only the finite deterministic architectural arc. For the exact tuple-space coordinate-view model, the full labeled-tuple-space graph class is now characterized exactly by monotone agreement-set families; what remains open is the corresponding classification up to unlabeled graph isomorphism, the restricted-state and stochastic-support variants, and the sharpness of the upper theory on those broader subclasses.

A. Artifacts

The Lean 4 formalization is included as supplementary material. Appendix A records representative realizability readings, and Supplement A contains the companion case-study material.
Acknowledgment: AI-use Disclosure

Generative AI tools (including Codex, Claude Code, Augment, Kilo, and OpenCode) were used throughout this manuscript, across all sections (Abstract, Introduction, theoretical development, proof sketches, applications, conclusion, and appendix) and across all stages from initial drafting to final revision. The tools were used for boilerplate generation, prose and notation refinement, LaTeX/structure cleanup, translation of informal proof ideas into candidate formal artifacts (Lean/LaTeX), and repeated adversarial reviewer-style critique passes to identify blind spots and clarity gaps. The author retained full intellectual and editorial control, including problem selection, theorem statements, assumptions, novelty framing, acceptance criteria, and final inclusion/exclusion decisions. No technical claim was accepted solely from AI output. Formal claims reported as machine-verified were admitted only after Lean verification (no sorry in cited modules) and direct author review; Lean was used as an integrity gate for responsible AI-assisted research. The author is solely responsible for all statements, citations, and conclusions.

REFERENCES
[1] C. E. Shannon, "Zero-error capacity of a noisy channel," IRE Transactions on Information Theory, vol. 2, no. 3, pp. 8–19, 1956.
[2] J. Körner, "Coding of an information source having ambiguous alphabet and the entropy of graphs," Transactions of the 6th Prague Conference on Information Theory, pp. 411–425, 1973.
[3] L. Lovász, "On the Shannon capacity of a graph," IEEE Transactions on Information Theory, vol. 25, no. 1, pp. 1–7, 1979.
[4] H. S. Witsenhausen, "The zero-error side information problem and chromatic numbers," IEEE Transactions on Information Theory, vol. 22, no. 5, pp. 592–593, 1976.
[5] D. Slepian and J. K. Wolf, "Noiseless coding of correlated information sources," IEEE Transactions on Information Theory, vol. 19, no. 4, pp.
471–480, 1973.
[6] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. Wiley-Interscience, 2006.
[7] L. de Moura and S. Ullrich, "The Lean 4 theorem prover and programming language," in Automated Deduction – CADE 28. Springer, 2021, pp. 625–635.
[8] R. M. Fano, Transmission of Information: A Statistical Theory of Communications. Cambridge, MA: MIT Press, 1961.
[9] H. S. Witsenhausen and A. D. Wyner, "A conditional entropy bound for a pair of discrete random variables," IEEE Transactions on Information Theory, vol. 21, no. 5, pp. 493–501, 1975.
[10] J. Oxley, Matroid Theory, 2nd ed. Oxford University Press, 2011.
[11] A. Orlitsky and J. R. Roche, "Coding for computing," IEEE Transactions on Information Theory, vol. 47, no. 3, pp. 903–917, 2001.
[12] V. Doshi, D. Shah, M. Médard, and M. Effros, "Functional compression through graph coloring," IEEE Transactions on Information Theory, vol. 56, no. 8, pp. 3901–3917, 2010.
[13] K. V. Rashmi, N. B. Shah, K. Ramchandran, and D. Gu, "Multi-version coding—an information-theoretic perspective of consistent distributed storage," IEEE Transactions on Information Theory, vol. 63, no. 6, pp. 4111–4128, 2017.
[14] G. Kiczales, J. des Rivières, and D. G. Bobrow, The Art of the Metaobject Protocol. MIT Press, 1991.
[15] B. C. Smith, "Reflection and semantics in Lisp," in Proceedings of the 11th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages, 1984, pp. 23–35.
[16] X. Leroy, "Formal verification of a realistic compiler," Communications of the ACM, vol. 52, no. 7, pp. 107–115, 2009.
[17] M. Teich, R. Hettinger, and Y. Selivanov, "PEP 487 – simpler customisation of class creation," Python Enhancement Proposals, 2016. [Online]. Available: peps.python.org/pep-0487/
[18] Python Software Foundation, "Python data model," Python 3 Documentation, 2025. [Online]. Available: docs.python.org/3/reference/datamodel.html
[19] The PostgreSQL Global Development Group, "Materialized views," PostgreSQL documentation, 2025. [Online]. Available: www.postgresql.org/docs/current/rules-materializedviews.html
[20] ——, "pg_matviews," PostgreSQL system catalog documentation, 2025. [Online]. Available: www.postgresql.org/docs/current/view-pg-matviews.html
[21] The Rust Team, "The Rust reference," Language reference, 2024. [Online]. Available: doc.rust-lang.org/reference/
[22] Microsoft, "Decorators," TypeScript Handbook, 2025. [Online]. Available: www.typescriptlang.org/docs/handbook/decorators.html
[23] MDN Web Docs, "Classes," JavaScript reference, 2025. [Online]. Available: developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes
[24] J. Gosling, B. Joy, G. Steele, G. Bracha, A. Buckley, D. Smith, and G. Bierman, The Java Language Specification, Java SE 17 Edition. Oracle America, Inc., 2021. [Online]. Available: docs.oracle.com/javase/specs/
[25] The Go Authors, "The Go programming language specification," Language specification, 2024. [Online]. Available: go.dev/ref/spec
[26] npm, Inc., "package-lock.json," npm CLI documentation, 2026. [Online]. Available: docs.npmjs.com/cli/v8/configuring-npm/package-lock-json/
[27] ——, "npm explain," npm CLI documentation, 2026. [Online]. Available: docs.npmjs.com/cli/v7/commands/npm-explain/
[28] ——, "npm ls," npm CLI documentation, 2026. [Online]. Available: docs.npmjs.com/cli/v7/commands/npm-ls/
[29] The Rust Project Developers, "Cargo.toml vs Cargo.lock," The Cargo Book, 2026. [Online]. Available: doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
[30] ——, "cargo-tree," The Cargo Book, 2026. [Online]. Available: doc.rust-lang.org/cargo/commands/cargo-tree.html
[31] ——, "cargo-metadata," The Cargo Book, 2026. [Online]. Available: doc.rust-lang.org/cargo/commands/cargo-metadata.html
[32] Poetry Contributors, "CLI," Poetry documentation, 2026. [Online]. Available: python-poetry.org/docs/cli/
[33] pnpm Contributors, "pnpm install," pnpm documentation, 2026. [Online]. Available: pnpm.io/cli/install
[34] ——, "pnpm why," pnpm documentation, 2026. [Online]. Available: pnpm.io/cli/why
[35] ——, "pnpm list," pnpm documentation, 2026. [Online]. Available: pnpm.io/cli/list

APPENDIX

This appendix provides a consolidated classification of representative systems according to the architectural patterns identified by Corollary XII.10. The pattern labels used in the table are: Pattern A (Blind Update), meaning missing causal propagation; Pattern B (Amnesiac Derivation), meaning missing provenance observability; and Pattern C (Coherent Kernel), meaning both conditions are present and verifiable rate 1 is realizable.

Host family | Host | Pattern class | Prop. (causal propagation) | Prov. (provenance) | Rate 1 | Mechanism
Prog. languages | Python [17], [18] | C | yes | yes | yes | For the structural facts considered here, class-definition hooks and runtime hierarchy introspection are host-native.
Databases | Engine-maintained materialized view [19], [20] | C | yes | yes | yes | The engine maintains the derived relation and exposes catalog metadata for it.
Prog. languages | Rust [21] | B | partial | no | no | Compile-time macro generation exists, but the compiled runtime artifact does not retain derivation provenance.
Prog. languages | TypeScript / JavaScript [22], [23] | B | partial | no | no | Decorators can attach metadata, but type erasure and missing subclass provenance block exact verification in the host runtime.
Prog. languages | Java [24] | A/B | no | no | no | Under the runtime-only host interpretation used here, annotations and related derivation tooling are external to the JVM runtime.
Prog. languages | Go [25] | A/B | no | no | no | Reflection exists, but there are no definition-time hooks or host-native implementer registries.
Databases | External ETL copy / duplicate summary table | A/B | no | no | no | Synchronization is externalized, and provenance is no longer intrinsic to the host database.
Deps.
| npm [26]–[28] | A | no | yes | no | Lockfile and query commands expose provenance, but manifest edits require explicit npm operations before the derived state is updated.
Deps. | Cargo [29]–[31] | A | no | yes | no | Resolved-dependency provenance is exposed, but Cargo.lock remains a distinct derived artifact synchronized through Cargo commands.
Deps. | Poetry [32] | A | no | yes | no | The resolved graph is queryable, but lock regeneration is an explicit step rather than part of the source edit itself.
Deps. | pnpm [33]–[35] | A | no | yes | no | Provenance is queryable, but the lockfile remains separately maintained rather than automatically updated with source edits.
Table A.1. Classification of representative systems. The same theorem-level coordinates apply uniformly across programming-language runtimes, databases, and dependency managers, so the systems analysis is a single validation table rather than a domain-by-domain survey.

CLAIM MAPPING TO LEAN HANDLES

Paper handle | Status | Lean support
cor:integrity-threshold | Full | COH2, COH1
cor:matroid-capacity-bounds | Full | FM3, FM6, FM7, FM8, FM9, FM10, FM11
lem:confusability-clique | Full | OM2
lem:info-dof | Full | CC3
thm:affine-fact-matroid | Full | FM1, FM2, FM4, FM5, FM12
thm:asymptotic-capacity | Full | MF7, MF13, MF14, MF15, MF16, MF17, MF22, MF23
thm:causal-necessary | Full | REQ2
thm:coherence-capacity | Full | CC4, CAP3, CAP2
lem:confusability-converse | Full | OM2
lem:derivation-excludes | Full | CC1
lem:dof-gt-one-incoherence | Full | COH2
lem:dof-one-coherence | Full | COH1
lem:equivalence-viewpoint | Full | CC2, OM1, OM2, OM5, OM6, OM7, OM8, OM10, OM11
lem:fano-converse | Full | CC2
lem:finite-error-budgeted | Full | OM3, OM4
lem:independence | Full | IND1, IND2
lem:lower-bound | Full | BND2
lem:multifact-colorability | Full | MF8, MF9, MF12
lem:multifact-max-success | Full | MF2, MF10
lem:multifact-nonclique | Full | MF1, MF3, MF4, MF5, MF6
lem:multifact-product | Full | MF18, MF19, MF20, MF21
lem:multifact-success-set | Full | MF11
lem:pair-injective | Full | OM8, OM9
lem:provenance-necessary | Full | REQ3
lem:side-info | Full | CC5
lem:ssot-iff | Full | L1
lem:timing-forces | Full | REQ1, TRI1
lem:unbounded-gap | Full | BND3
lem:upper-bound | Full | BND1
Auto summary: mapped 29/29 (full=29, derived=0, unmapped=0).

MECHANIZED VERIFICATION (LEAN 4)

The main theorem chains are machine-checked in Lean 4, and the cited proof modules compile with zero sorry placeholders. The paper keeps some proofs abbreviated for readability, but full formal proofs of the cited results appear in the supplementary Lean artifact. For the asymptotic arguments, the artifact formalizes the relevant block power-rate sequences directly and proves the required supermultiplicative or subadditive inequalities on those sequences before invoking Fekete-style convergence lemmas. The asymptotic capacity and asymptotic upper values are therefore obtained inside the formal development from explicit sequence objects rather than only quoted as external limits. The cited asymptotic theorems use classical reasoning in the same way as the corresponding finite graph and entropy arguments.
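As a reminder of the classical convergence step formalized there (notation assumed: α the independence number, ⊠ the strong product, Θ(G) the logarithmic Shannon capacity), the supermultiplicative inequality feeds Fekete's lemma as:

```latex
% Supermultiplicativity of independent sets under the strong product
% makes a_n = \log \alpha(G^{\boxtimes n}) superadditive, so Fekete's
% lemma gives convergence of the normalized block rates to their supremum:
\alpha\!\left(G^{\boxtimes(m+n)}\right) \;\ge\; \alpha\!\left(G^{\boxtimes m}\right)\alpha\!\left(G^{\boxtimes n}\right)
\quad\Longrightarrow\quad
\Theta(G) \;=\; \lim_{n\to\infty} \frac{\log \alpha\!\left(G^{\boxtimes n}\right)}{n}
\;=\; \sup_{n\ge 1} \frac{\log \alpha\!\left(G^{\boxtimes n}\right)}{n}.
```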
The artifact covers the theorem packages used in the paper:
• the threshold and unified-converse package: coherence, independent rate, the side-information lower bound, pair injectivity, clique/fiber/global budget bounds, conditional-entropy and decoder-output reformulations, deterministic data processing, and the budgeted finite-error extension;
• the graph and asymptotic-capacity package: an explicit confusability graph object, exact recovery as colorability, exact success-set cardinality and weighted success mass via largest colorable induced subgraphs, strong-power identification for block confusability, supermultiplicative block growth, Fekete-style asymptotic Shannon capacity, complement-chromatic upper bounds, and a fixed Lovász-ϑ upper theory with standard orthonormal, primal-PSD, and dual forms;
• the equality package: the clique-fiber equality theorem, export back to the original view-family model through transitivity of confusability, the meet-witness sufficient condition, and the stronger fiber-coherence collapse criterion;
• the affine fact-matroid package: semantic determination as span membership of coordinate functionals, the representable matroid on fact indices, and the identification of minimal determining fact sets with the matroid bases;
• the downstream consequence package: derivation, the timing constraint, causal propagation, provenance observability, the realizability iff theorem, and the O(1) / Ω(n) rate-complexity gap.
The supplementary artifact contains the theorem-to-declaration coverage matrix, proof inventory, and build instructions.

Lean Handle Index.
AFM1 FactMatroid.determinesFact_iff_mem_factSpan Ssot/FactMatroid.lean
AFM2 FactMatroid.factMatroid_indep_iff Ssot/FactMatroid.lean
AFM3 FactMatroid.basisFacts_card_eq_finrank Ssot/FactMatroid.lean
AFM4 FactMatroid.basisFacts_minimal Ssot/FactMatroid.lean
AFM5 FactMatroid.minimalDetermining_basis Ssot/FactMatroid.lean
AFM6 FactMatroid.factRankFinset_le_card Ssot/FactMatroid.lean
AFM7 FactMatroid.coordProjection_range_le_card Ssot/FactMatroid.lean
AFM8 FactMatroid.factRankFinset_eq_card_of_indepFacts Ssot/FactMatroid.lean
AFM9 FactMatroid.factRankFinset_eq_total_of_determinesAllFinset Ssot/FactMatroid.lean
AFM10 FactMatroid.factSpanFinset_eq_range_coordProjection_dualMap Ssot/FactMatroid.lean
AFM11 FactMatroid.factRankFinset_eq_finrank_range_coordProjection Ssot/FactMatroid.lean
AFM12 FactMatroid.factRankFinset_le_finrank_directionSpace Ssot/FactMatroid.lean
AFM13 FactMatroid.coordProjection_range_eq_card_of_indepFacts Ssot/FactMatroid.lean
AFM14 FactMatroid.coordProjection_range_eq_total_of_determinesAllFinset Ssot/FactMatroid.lean
BND1 ssot_upper_bound Ssot/Bounds.lean
BND2 non_ssot_lower_bound Ssot/Bounds.lean
BND3 ssot_advantage_unbounded Ssot/Bounds.lean
CAP2 ssot_guarantees_coherence Ssot/Coherence.lean
CAP3 non_ssot_permits_incoherence Ssot/Coherence.lean
CC1 CC.derivation_preserves_coherence_core
CC2 CC.finite_counting_converse
CC3 CC.info_dof_counting_contradiction
CC4 CC.rate_incoherence_step
CC5 CC.side_information_requirement
CIA1 ClaimClosure.side_information_requirement Ssot/ClaimClosure.lean
CIA3 ClaimClosure.finite_counting_converse Ssot/ClaimClosure.lean
CIA4 ClaimClosure.info_dof_counting_contradiction Ssot/ClaimClosure.lean
COH1 dof_one_implies_coherent Ssot/Coherence.lean
COH2 dof_gt_one_incoherence_possible Ssot/Coherence.lean
DER2 ClaimClosure.derivation_preserves_coherence_core Ssot/ClaimClosure.lean
FM1 FM.basisFacts_card_eq_finrank
FM2 FM.basisFacts_minimal
FM3 FM.coordProjection_range_le_card
FM4 FM.determinesFact_if_mem_factSpan
FM5
FM.factMatroid_indep_if
FM6 FM.factRankFinset_eq_card_of_indepFacts
FM7 FM.factRankFinset_eq_finrank_range_coordProjection
FM8 FM.factRankFinset_eq_total_of_determinesAllFinset
FM9 FM.factRankFinset_le_card
FM10 FM.factRankFinset_le_finrank_directionSpace
FM11 FM.factSpanFinset_eq_range_coordProjection_dualMap
FM12 FM.minimalDetermining_basis
IND1 both_requirements_independent Ssot/Requirements.lean
IND2 both_requirements_independent' Ssot/Requirements.lean
L1 matroid_basis_equicardinality paper1/axis_framework.lean
L2 ssot_if
MF1 MF.binaryViews_colorable_two
MF2 MF.binaryViews_maxColorableCard_one_eq_two
MF3 MF.binaryViews_nonclique_witness
MF4 MF.binaryViews_not_colorable_one
MF5 MF.binaryViews_oneTag_exactOn_card_le_two
MF6 MF.binaryViews_oneTag_uniform_successProb_le_half
MF7 MF.blockRate_le_shannonCapacityReal
MF8 MF.colorable_if_graphColorable
MF9 MF.exactRecovery_if_properColoring
MF10 MF.exists_exactOn_card_eq_maxColorableCard
MF11 MF.exists_exactOn_if_colorableOn
MF12 MF.exists_exactRecovery_if_colorable
MF13 MF.graphNegLogSeq_subadditive
MF14 MF.graphPowerRate_le_graphShannonCapacityReal
MF15 MF.graphShannonCapacityReal_nonneg
MF16 MF.graphShannonLowerCapacityReal_eq_ofReal_graphShannonCapacityReal
MF17 MF.iSup_graphPowerRate_eq_graphShannonCapacityReal
MF18 MF.pairColorable_of_colorable
MF19 MF.pairProperColoring_of_componentColorings
MF20 MF.pair_product_clique_budget_lower_bound
MF21 MF.pair_product_success_card_lower_bound
MF22 MF.shannonCapacityReal_eq_iSup_blockRate
MF23 MF.tendsto_graphPowerRateNat_graphShannonCapacityReal
MFT3 MultiFact.exactRecovery_iff_properColoring Ssot/MultiFact.lean
MFT4 MultiFact.exists_exactRecovery_iff_colorable Ssot/MultiFact.lean
MFT5 MultiFact.binaryViews_nonclique_witness Ssot/MultiFact.lean
MFT6 MultiFact.binaryViews_colorable_two Ssot/MultiFact.lean
MFT7 MultiFact.binaryViews_not_colorable_one Ssot/MultiFact.lean
MFT8 MultiFact.binaryViews_oneTag_uniform_successProb_le_half
Ssot/MultiFact.lean
MFT9 MultiFact.exists_exactOn_iff_colorableOn Ssot/MultiFact.lean
MFT10 MultiFact.binaryViews_oneTag_exactOn_card_le_two Ssot/MultiFact.lean
MFT11 MultiFact.pairProperColoring_of_componentColorings Ssot/MultiFact.lean
MFT12 MultiFact.pairColorable_of_colorable Ssot/MultiFact.lean
MFT13 MultiFact.pair_product_clique_budget_lower_bound Ssot/MultiFact.lean
MFT14 MultiFact.exists_exactOn_card_eq_maxColorableCard Ssot/MultiFact.lean
MFT15 MultiFact.binaryViews_maxColorableCard_one_eq_two Ssot/MultiFact.lean
MFT16 MultiFact.pair_product_success_card_lower_bound Ssot/MultiFact.lean
MFT17 MultiFact.pair_product_independent_lower_bound Ssot/MultiFact.lean
MFT18 MultiFact.exactOn_mass_le_maxColorableMass Ssot/MultiFact.lean
MFT19 MultiFact.exists_exactOn_mass_eq_maxColorableMass Ssot/MultiFact.lean
MFT20 MultiFact.colorable_iff_graphColorable Ssot/MultiFact.lean
MFT21 MultiFact.block_power_independent_lower_bound Ssot/MultiFact.lean
MFT22 MultiFact.block_add_independent_lower_bound Ssot/MultiFact.lean
MFT23 MultiFact.exactRateDistortionValue_upper Ssot/MultiFact.lean
MFT24 MultiFact.exists_exactRateDistortionValue Ssot/MultiFact.lean
MFT25 MultiFact.pairConfusable_iff_strongProdAdj Ssot/MultiFact.lean
MFT26 MultiFact.pairConfusabilityGraph_eq_strongProd Ssot/MultiFact.lean
MFT29 MultiFact.concatBlockConfusable_iff_strongProdAdj Ssot/MultiFact.lean
MFT30 MultiFact.blockConfusabilityGraph_addIsoStrongProd Ssot/MultiFact.lean
MFT31 MultiFact.blockMaxIndependentCard_eq_graphMaxIndependentCard Ssot/MultiFact.lean
MFT32 MultiFact.shannonLowerCapacity_eq_iSup_graphRates Ssot/MultiFact.lean
MFT33 MultiFact.blockRate_le_blockRate_mul Ssot/MultiFact.lean
MFT34 MultiFact.blockConfusable_iff_strongPowAdj Ssot/MultiFact.lean
MFT35 MultiFact.blockConfusabilityGraph_eq_strongPow Ssot/MultiFact.lean
MFT36 MultiFact.shannonLowerCapacity_eq_graphShannonLowerCapacity Ssot/MultiFact.lean
MFT38 MultiFact.strongPow_addIsoStrongProd Ssot/MultiFact.lean
MFT39
MultiFact.graphMaxIndependentCard_strongPow_add_eq_strongProd Ssot/MultiFact.lean
MFT40 MultiFact.graphMaxIndependentCard_strongProd_lower_bound Ssot/MultiFact.lean
MFT41 MultiFact.graphMaxIndependentCard_strongPow_mul_lower_bound Ssot/MultiFact.lean
MFT42 MultiFact.graphPowerRate_le_graphPowerRate_mul Ssot/MultiFact.lean
MFT43 MultiFact.graphNegLogSeq_subadditive Ssot/MultiFact.lean
MFT44 MultiFact.tendsto_graphPowerRateNat_graphShannonCapacityReal Ssot/MultiFact.lean
MFT45 MultiFact.graphPowerRate_le_graphShannonCapacityReal Ssot/MultiFact.lean
MFT46 MultiFact.graphShannonCapacityReal_nonneg Ssot/MultiFact.lean
MFT47 MultiFact.iSup_graphPowerRate_eq_graphShannonCapacityReal Ssot/MultiFact.lean
MFT48 MultiFact.shannonCapacityReal_eq_iSup_blockRate Ssot/MultiFact.lean
MFT49 MultiFact.blockRate_le_shannonCapacityReal Ssot/MultiFact.lean
MFT50 MultiFact.compl_strongPow_adj_iff_exists Ssot/MultiFact.lean
MFT51 MultiFact.complStrongPow_colorable Ssot/MultiFact.lean
MFT52 MultiFact.graphMaxIndependentCard_strongPow_le_complChromaticPow Ssot/MultiFact.lean
MFT53 MultiFact.graphPowerRate_le_log_complChromatic Ssot/MultiFact.lean
MFT54 MultiFact.graphShannonCapacityReal_le_log_complChromatic Ssot/MultiFact.lean
MFT55 MultiFact.graphShannonCapacityReal_le_log_complChromaticNumber Ssot/MultiFact.lean
MFT56 MultiFact.graphShannonLowerCapacity_eq_ofReal_graphShannonCapacityReal Ssot/MultiFact.lean
MFT57 MultiFact.graphThetaPSDLogUpper_eq_graphThetaLogUpper Ssot/MultiFact.lean
MFT58 MultiFact.graphShannonCapacityReal_le_graphThetaPSDLogUpper Ssot/MultiFact.lean
MFT59 MultiFact.graphThetaPSDLogUpper_strongProd_le Ssot/MultiFact.lean
MFT60 MultiFact.graphThetaPSDLogSeq_subadditive Ssot/MultiFact.lean
MFT61 MultiFact.tendsto_graphThetaPSDPowerRateNat_graphThetaPSDAsymptoticUpper Ssot/MultiFact.lean
MFT62 MultiFact.graphThetaPSDAsymptoticUpper_le_graphThetaPSDLogUpper Ssot/MultiFact.lean
MFT63 MultiFact.iInf_graphThetaPSDPowerRate_eq_graphThetaPSDAsymptoticUpper
Ssot/MultiFact.lean
MFT64 MultiFact.lovaszOrthoLogUpper_eq_graphThetaLogUpper Ssot/MultiFact.lean
MFT65 MultiFact.lovaszThetaPrimalLogUpper_eq_graphThetaPSDLogUpper Ssot/MultiFact.lean
MFT66 MultiFact.graphThetaLiftedDualLogUpper_eq_graphThetaSchurDualLogUpper Ssot/MultiFact.lean
MFT67 MultiFact.graphOneShotLogMaxIndependentCard_le_graphThetaLogUpper Ssot/MultiFact.lean
MFT68 MultiFact.graphPowerRate_le_graphThetaPSDPowerRate Ssot/MultiFact.lean
MFT69 MultiFact.graphShannonCapacityReal_le_graphThetaPSDAsymptoticUpper Ssot/MultiFact.lean
MFT70 MultiFact.lovaszThetaPrimalAsymptoticUpper_eq_graphThetaPSDAsymptoticUpper Ssot/MultiFact.lean
MFT71 MultiFact.graphShannonCapacityReal_le_lovaszThetaPrimalAsymptoticUpper Ssot/MultiFact.lean
MFT72 MultiFact.graphThetaSchurDualLogUpper_strongProd_le Ssot/MultiFact.lean
MFT73 MultiFact.graphThetaSchurDualLogUpper_iso_eq Ssot/MultiFact.lean
MFT74 MultiFact.graphThetaSchurDualLogSeq_subadditive Ssot/MultiFact.lean
MFT75 MultiFact.tendsto_graphThetaSchurDualPowerRateNat_graphThetaSchurDualAsymptoticUpper Ssot/MultiFact.lean
MFT76 MultiFact.iInf_graphThetaSchurDualPowerRate_eq_graphThetaSchurDualAsymptoticUpper Ssot/MultiFact.lean
MFT77 MultiFact.graphThetaSchurDualAsymptoticUpper_le_graphThetaPSDAsymptoticUpper Ssot/MultiFact.lean
MFT78 MultiFact.graphThetaSchurDualAsymptoticUpper_le_lovaszThetaPrimalAsymptoticUpper Ssot/MultiFact.lean
MFT79 MultiFact.lovaszThetaDualLogUpper_eq_graphThetaSchurDualLogUpper Ssot/MultiFact.lean
MFT80 MultiFact.lovaszThetaDualAsymptoticUpper_eq_graphThetaSchurDualAsymptoticUpper Ssot/MultiFact.lean
MFT81 MultiFact.lovaszThetaDualAsymptoticUpper_le_lovaszThetaPrimalAsymptoticUpper Ssot/MultiFact.lean
MFT82 MultiFact.lovaszThetaDualLogUpper_eq_lovaszThetaPrimalLogUpper Ssot/MultiFact.lean
MFT83 MultiFact.lovaszThetaDualAsymptoticUpper_eq_lovaszThetaPrimalAsymptoticUpper Ssot/MultiFact.lean
MFT84 MultiFact.graphShannonCapacityReal_le_lovaszThetaAsymptoticUpper Ssot/MultiFact.lean
MFT85
MultiFact.graphShannonCapacityReal_eq_lovaszThetaAsymptoticUpper_clusterGraph Ssot/MultiFact.lean
MFT86 MultiFact.graphShannonCapacityReal_eq_log_card_clusterGraph Ssot/MultiFact.lean
MFT87 MultiFact.lovaszThetaAsymptoticUpper_eq_log_card_clusterGraph Ssot/MultiFact.lean
MFT88 MultiFact.confusabilityGraph_eq_clusterGraph_transcriptLabel Ssot/MultiFact.lean
MFT89 MultiFact.shannonCapacityReal_eq_shannonLovaszThetaAsymptoticUpper_of_fiberCoherent Ssot/MultiFact.lean
MFT90 MultiFact.shannonCapacityReal_eq_log_transcriptFiberCard_of_fiberCoherent Ssot/MultiFact.lean
MFT91 MultiFact.shannonLovaszThetaAsymptoticUpper_eq_log_transcriptFiberCard_of_fiberCoherent Ssot/MultiFact.lean
MFT96 MultiFact.shannonLovaszThetaAsymptoticUpper_eq_log_connectedComponents_of_confusableTransitive Ssot/MultiFact.lean
MFT100 MultiFact.shannonCapacityReal_eq_shannonLovaszThetaAsymptoticUpper_of_meetWitnessed Ssot/MultiFact.lean
MFT101 MultiFact.shannonCapacityReal_eq_log_connectedComponents_of_meetWitnessed Ssot/MultiFact.lean
MFT102 MultiFact.shannonLovaszThetaAsymptoticUpper_eq_log_connectedComponents_of_meetWitnessed Ssot/MultiFact.lean
MFT107 MultiFact.ViewCompositionClosed Ssot/MultiFact.lean
MFT108 MultiFact.confusableTransitive_iff_viewCompositionClosed Ssot/MultiFact.lean
MFT109 MultiFact.confusable_iff_exists_viewClusterAdj Ssot/MultiFact.lean
MFT110 MultiFact.SupportConfusable Ssot/MultiFact.lean
MFT111 MultiFact.deterministicSupportConfusable_iff_confusable Ssot/MultiFact.lean
MFT112 MultiFact.deterministicSupportConfusabilityGraph_eq_confusabilityGraph Ssot/MultiFact.lean
MFT113 MultiFact.card_state_eq Ssot/MultiFact.lean
MFT114 MultiFact.confusableOracle_eq_true_iff Ssot/MultiFact.lean
MFT115 MultiFact.confusableOrderedPairs Ssot/MultiFact.lean
MFT116 MultiFact.card_state_prod_eq Ssot/MultiFact.lean
MFT117 MultiFact.card_confusableOrderedPairs_le_naive_bound Ssot/MultiFact.lean
MFT118 MultiFact.observe_eq_iff_subset_agreeSet Ssot/MultiFact.lean
MFT119
MultiFact.confusable_iff_exists_vie w_subset_agreeSet Ssot/MultiFact.lean MFT120 MultiFact.agreeSet_statePerm_eq Ssot/MultiFact.lean MFT121 MultiFact.confusable_statePerm_iff Ssot/MultiFact.lean MFT122 MultiFact.confusable_congr_agreeSet Ssot/MultiFact.lean MFT123 MultiFact.confusabilityGraph_statePermIso Ssot/MultiFact.lean MFT124 MultiFact.exists_confusabilityGraph_iso_send Ssot/MultiFact.lean MFT125 MultiFact.agreementUpwardFamily_upwardClosed Ssot/MultiFact.lean MFT126 MultiFact.confusable_iff_agreeSet_mem_agreementUpwardF amily Ssot/MultiFact.lean MFT127 MultiFact.confusable_viewFamilyOfAgreementUpw ardFamily_iff Ssot/MultiFact.lean MFT128 MultiFact.exists_viewF amily_iff_agreementClassified Ssot/MultiFact.lean MFT129 MultiFact.agreeSet_eq_univ_if f Ssot/MultiFact.lean MFT130 MultiFact.exists_distinct_states_with_agreeSet_eq_iff Ssot/MultiFact.lean MFT131 MultiFact.upwardClosed_family_eq_on_proper_subsets_of_same_relation Ssot/MultiFact.lean MFT132 MultiFact.agreementOracle_eq_true_iff Ssot/MultiFact.lean MFT133 MultiFact.agreementOracle_eq_confusableOracle Ssot/MultiFact.lean MFT134 MultiFact.agreementOracleV iewW ork_le Ssot/MultiFact.lean MFT135 MultiFact.agreementOracleT otalWork_le Ssot/MultiFact.lean MFT136 MultiFact.meetWitnessed_implies_transiti ve_confusability Ssot/MultiFact.lean MFT137 MultiFact.transitive_confusability_implies_cluster_graph Ssot/MultiFact.lean OBS1 ObserverModel.exact_recovery_implies_pair_injecti ve paper1/Paper1IT/ObserverTagModel.lean OBS2 ObserverModel.pair_injective_implies_e xact_recovery paper1/Paper1IT/ObserverTagModel.lean OBS3 ObserverModel.fiber_card_le_tag_alphabet paper1/Paper1IT/ObserverTagModel.lean OBS4 ObserverModel.exact_recovery_global_count paper1/Paper1IT/ObserverTagModel.lean OBS5 ObserverModel.clique_card_le_tag_alphabet paper1/Paper1IT/ObserverTagModel.lean OM1 OM.DeterministicObservable.entropy_coarsen_le_sourceEntropy OM2 OM.clique_card_le_tag_alphabet OM3 OM.decodedOutputEntropy_fano_budgeted OM4 
OM.decodedOutputEntropy_fano_observation_only OM5 OM.decodedOutputEntropy_le_mutualInfoDeterministic OM6 OM.decodedOutputEntropy_log_gap_nonneg OM7 OM.deterministicKernel_entropy_data_processing OM8 OM.exact_recovery_implies_pair_injecti ve OM9 OM.pair_injective_implies_exact_reco very OM10 OM.sourceEntropy_eq_observationT agEntropy_add_conditionalEntropyGivenP air OM11 OM.successSet_clique_entropy_le_log_tags PRB47 ObserverModel.sourceEntropy_eq_observationT agEntropy_add_conditionalEntropyGiv enPair paper1/Paper1IT/ProbabilisticFinite.lean PRB55 ObserverModel.successSet_clique_entropy_le_log_tags paper1/Paper1IT/FanoFinite.lean PRB68 ObserverModel.deterministicKernel_entropy_data_processing paper1/Paper1IT/ProbabilisticFinite.lean PRB73 ObserverModel.observationT agEntropy_gap_nonneg paper1/Paper1IT/ProbabilisticFinite.lean PRB74 ObserverModel.observationT agEntropy_gap_eq_zero_of_klDiv_zero_uniform paper1/Paper1IT/ProbabilisticFinite.lean PRB81 ObserverModel.decodedOutputEntropy_le_mutualInfoDeterministic paper1/Paper1IT/ProbabilisticFinite.lean PRB82 ObserverModel.decodedOutputEntropy_source_gap_nonneg paper1/Paper1IT/ProbabilisticFinite.lean PRB83 ObserverModel.decodedOutputEntropy_log_gap_nonneg paper1/Paper1IT/ProbabilisticFinite.lean PRB85 ObserverModel.decodedOutputEntropy_fano_budgeted paper1/Paper1IT/FanoFinite.lean PRB87 ObserverModel.decodedOutputEntropy_fano_observation_only paper1/Paper1IT/FanoFinite.lean PRB90 ObserverModel.DeterministicObservable.entropy_coarsen_le_sourceEntropy paper1/Paper1IT/ProbabilisticFinite.lean PRB93 ObserverModel.decodedOutputEntropy_gap_eq_zero_of_klDiv_zero_uniform paper1/Paper1IT/ProbabilisticFinite.lean PRB94 ObserverModel.decodedOutputEntropy_gap_pos_implies_klDiv_ne_zero paper1/Paper1IT/ProbabilisticFinite.lean PRB95 ObserverModel.decodedOutputObservable_entropy_le_sourceEntropy paper1/Paper1IT/ProbabilisticFinite.lean RA T1 ClaimClosure.rate_incoherence_step Ssot/ClaimClosure.lean RED1 
ClaimClosure.redundancy_incoherence_equiv Ssot/ClaimClosure.lean REQ1 structural_facts_fixed_at_definition Ssot/Foundations.lean REQ2 definition_hooks_necessary Ssot/Requirements.lean REQ3 introspection_necessary_for_verification Ssot/Requirements.lean SOT1 ssot_if f Ssot/Completeness.lean TRI1 timing_trichotomy_exhaustive Ssot/Foundations.lean
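The index pairs OBS1/OBS2 (and OM8/OM9) assert the paper's central equivalence: exact recovery holds iff the map from a source state to its (observation, tag) pair is injective. The following Lean sketch shows the shape of that equivalence under stated assumptions; the names `observe`, `tag`, `ExactRecovery`, and `PairInjective` are illustrative stand-ins, not the actual definitions in paper1/Paper1IT/ObserverTagModel.lean, which may differ in form.

```lean
import Mathlib.Logic.Function.Basic

-- Hypothetical observer-model data: a view map and a tag map on states.
variable {S V T : Type} (observe : S → V) (tag : S → T)

/-- Exact recovery: some decoder reconstructs the state from (view, tag). -/
def ExactRecovery : Prop :=
  ∃ dec : V × T → S, ∀ s, dec (observe s, tag s) = s

/-- Pair injectivity: the map s ↦ (observe s, tag s) is injective. -/
def PairInjective : Prop :=
  Function.Injective fun s => (observe s, tag s)

/-- One direction of OBS1: a decoder forces the pair map to be injective. -/
theorem pairInjective_of_exactRecovery
    (h : ExactRecovery observe tag) : PairInjective observe tag := by
  obtain ⟨dec, hdec⟩ := h
  intro s₁ s₂ hpair
  -- `hpair` is definitionally an equality of (view, tag) pairs.
  have hpair' : (observe s₁, tag s₁) = (observe s₂, tag s₂) := hpair
  calc s₁ = dec (observe s₁, tag s₁) := (hdec s₁).symm
    _ = dec (observe s₂, tag s₂) := congrArg dec hpair'
    _ = s₂ := hdec s₂

/-- One direction of OBS2: an injective pair map yields a decoder,
    via a (noncomputable) left inverse. -/
noncomputable example [Nonempty S]
    (h : PairInjective observe tag) : ExactRecovery observe tag :=
  ⟨Function.invFun (fun s => (observe s, tag s)),
   fun s => Function.leftInverse_invFun h s⟩
```

The converse direction is noncomputable because it selects a decoder by inverting an injection; a constructive decoder would additionally need decidable equality on the finite state space, as the paper's finite setting provides.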
