Foundations for an Abstract Proof Theory in the Context of Horn Rules
We introduce a novel, logic-independent framework for the study of sequent-style proof systems, which covers a number of proof-theoretic formalisms and concrete proof systems that appear in the literature. In particular, we introduce a generalized form of sequents, dubbed ‘g-sequents,’ which are taken to be binary graphs of typical, Gentzen-style sequents. We then define a variety of ‘inference rule types’ as sets of operations that act over such objects, and define ‘abstract (sequent) calculi’ as pairs consisting of a set of g-sequents together with a finite set of operations. Our approach permits an analysis of how certain inference rule types interact in a general setting, demonstrating under what conditions rules of a specific type can be permuted with or simulated by others; these results apply to any sequent-style proof system that fits within our framework. We then leverage our permutation and simulation results to establish generic calculus and proof transformation algorithms, which show that every abstract calculus can be effectively transformed into a lattice of polynomially equivalent abstract calculi. We determine the complexity of computing this lattice and compute the relative sizes of proofs and sequents within distinct calculi of a lattice. We observe that the top and bottom elements of these lattices correspond to many known deep-inference nested sequent systems and labeled sequent systems, respectively, for logics characterized by Horn properties.
💡 Research Summary
The paper proposes a highly abstract, logic‑independent framework for studying sequent‑style proof systems, with a particular focus on logics whose semantics are characterized by Horn properties. The authors begin by generalizing the traditional Gentzen sequent into a graph‑based object called a “g‑sequent”. In a g‑sequent, each vertex is a standard Gentzen‑style sequent, while edges form an arbitrary binary relation; the framework does not prescribe any semantic interpretation of the edges, allowing the same syntactic structure to model labeled sequents (where edges encode accessibility), nested sequents (where edges encode tree nesting), linear nested sequents, and other variants.
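To make the g-sequent shape concrete, here is a minimal sketch of such a structure in Python. The class names, fields, and the `has_tree_shape` helper are illustrative assumptions, not the paper's formal definitions; the helper only checks the necessary indegree condition for the tree-like edge relation that nested sequents would use.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sequent:
    """A Gentzen-style sequent: antecedent |- consequent (formulas as strings)."""
    antecedent: frozenset
    consequent: frozenset

@dataclass
class GSequent:
    """A g-sequent: a binary graph whose vertices are ordinary sequents.

    The edges carry no fixed semantics: they may be read as accessibility
    (labeled sequents), as nesting (nested sequents), or left uninterpreted.
    """
    vertices: dict  # vertex id -> Sequent
    edges: set      # set of (source id, target id) pairs

    def has_tree_shape(self):
        """Necessary condition for a nested-sequent (tree) reading:
        exactly one root, and every other vertex has indegree at most one."""
        indegree = {v: 0 for v in self.vertices}
        for (_, tgt) in self.edges:
            indegree[tgt] += 1
        roots = [v for v, d in indegree.items() if d == 0]
        return len(roots) == 1 and all(d <= 1 for d in indegree.values())
```

A labeled sequent would use the same object but read `edges` as relational atoms; nothing in the data structure itself forces either interpretation.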
On top of this graph representation, the authors introduce “inference rule types”. Each rule type is parameterized by two families of constraints: sequent constraints (which restrict the formulae appearing in the vertices) and structural constraints (which restrict how edges may be added, removed, or rearranged). Concrete inference rules are obtained by instantiating these parameters. This abstraction subsumes the usual structural rules of labeled calculi, the deep‑inference rules of nested calculi, and the propagation/reachability rules that appear in Horn‑based systems.
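As a toy illustration of a structural constraint, consider how a Horn frame condition such as transitivity, R(x,y) ∧ R(y,z) → R(x,z), could be turned into an edge-manipulating rule. The function below is a hypothetical encoding for exposition only, not the paper's rule format: it performs one bottom-up rule application by adding every edge that the Horn clause's head demands.

```python
def transitivity_rule(edges):
    """One application of a structural rule for the Horn clause
    R(x,y) & R(y,z) -> R(x,z): extend the edge relation with every
    pair (x, z) whose two premise edges are already present."""
    new = set(edges)
    for (x, y) in edges:
        for (y2, z) in edges:
            if y == y2:
                new.add((x, z))
    return new
```

Iterating such a rule to a fixed point corresponds to the reachability/propagation behavior the summary mentions: instead of materializing edges, a propagation rule would check that a suitable path already exists.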
An “abstract calculus” is then defined as a pair consisting of a set of g‑sequents (the language) together with a finite set of inference rule types (the operational component). Because the definition makes no reference to any particular logical language, any sequent‑style system that can be encoded as g‑sequents falls under the umbrella, including a wide variety of modal, intuitionistic, bi‑intuitionistic, provability, and tense logics whose frame conditions are Horn clauses.
The core technical contribution is a systematic study of how different rule types interact. The authors formalize two fundamental relations: permutation, where two rule applications can be swapped without affecting the final conclusion, and simulation, where one rule type can be replaced by a composition of others. Building on these notions, they define two novel operations on rule types: absorption, which strengthens a rule so that it can simulate another, and fracturing, which decomposes a complex rule into simpler components. These operations enable the elimination of structural rules (e.g., those that manipulate edges) and the introduction of reachability/propagation rules, a process they term “structural refinement”.
A striking result is that every abstract calculus belongs to a finite lattice of “polynomially equivalent” calculi. The lattice is generated by repeatedly applying absorption and fracturing, together with permutation and simulation steps. The top element of the lattice corresponds to an implicit calculus, characterized by deep inference and reachability rules; concrete instances include many nested‑sequent systems with propagation rules. The bottom element is an explicit calculus, where all structural information is made explicit in the sequents; this matches the family of labeled‑sequent calculi. The authors provide algorithms to compute the entire lattice in polynomial time, and they prove that moving between any two nodes of the lattice preserves proof size up to a polynomial factor. Consequently, proof‑size blow‑up or shrinkage caused by translating between, say, a labeled system and a nested system is tightly bounded.
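The lattice-generation idea can be sketched generically as a closure computation: starting from one calculus, repeatedly apply calculus-to-calculus transformations until no new calculi appear. The sketch below is a plain breadth-first closure under an arbitrary list of transformation functions; the transformations themselves (stand-ins for absorption and fracturing) are supplied by the caller and are hypothetical here.

```python
from collections import deque

def generate_lattice(start, transforms):
    """Breadth-first closure of `start` under `transforms`.

    `start` is a calculus represented as a hashable object (e.g., a
    frozenset of rule names); each transform maps a calculus to a set
    of equivalent calculi. Returns the set of all calculi reachable
    from `start`.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        calculus = queue.popleft()
        for transform in transforms:
            for successor in transform(calculus):
                if successor not in seen:
                    seen.add(successor)
                    queue.append(successor)
    return seen
```

In the paper's setting the transformations are proof-preserving up to a polynomial factor, so every calculus in the computed set proves the same g-sequents with polynomially related proof sizes.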
The paper also situates its contributions within the existing literature. Prior works on “structural refinement” demonstrated case‑by‑case transformations between labeled and nested systems for specific logics (e.g., provability, tense, or modal logics). The present framework abstracts those case studies, showing that the same transformation principles arise from the generic properties of rule types. Moreover, the authors discuss how their graph‑based approach differs from other graph‑oriented proof frameworks: unlike labeled sequents, g‑sequents are purely syntactic and do not commit to a semantic reading of edges, which grants greater flexibility.
In summary, the authors deliver:
- A unified syntactic representation (g‑sequents) that captures a broad spectrum of sequent‑style formalisms.
- A parameterized notion of inference rule types, together with formal notions of permutation, simulation, absorption, and fracturing.
- Proof‑theoretic results showing that any abstract calculus can be transformed into a lattice of polynomially equivalent calculi, with explicit algorithms and complexity bounds.
- Concrete identification of the lattice’s top (deep‑inference, reachability‑rich) and bottom (explicit, labeled) elements, explaining the duality observed between many known nested and labeled systems.
- A toolkit that can be applied to automate the translation, optimization, and complexity analysis of proof systems across a wide range of Horn‑characterized logics.
Overall, the work offers a powerful, logic‑agnostic methodology for analyzing, comparing, and transforming sequent‑style proof systems, with immediate implications for automated reasoning, proof search optimization, and the design of new logical calculi.