A Minimal Propositional Type Theory
Propositional type theory, first studied by Henkin, is the restriction of simple type theory to a single base type that is interpreted as the set of the two truth values. We show that two constants (falsity and implication) suffice for denotational and deductive completeness. Denotational completeness means that every value of the full set-theoretic type hierarchy can be described by a closed term. Deductive completeness is shown for a sequent-based proof system that extends a propositional natural deduction system with lambda conversion and Boolean replacement.
💡 Research Summary
The paper revisits propositional type theory (PTT), a restriction of simple type theory in which there is a single base type interpreted as the Boolean set {false, true}. Historically, PTT has been studied using a fairly rich collection of primitive constants and logical connectives, but the authors ask a more economical question: how few constants are sufficient to achieve both denotational completeness (every semantic object can be denoted by a closed term) and deductive completeness (every semantically valid sequent is provable)? Their answer is strikingly minimal: only two constants, falsity (⊥) and implication (→), together with the usual λ‑calculus machinery, suffice.
The technical development proceeds in several stages. First, the syntax is defined: terms are built from variables, λ‑abstraction, application, and the two constants ⊥ and →. Types are generated from the base type B (interpreted as the two truth values) by the usual arrow constructor. The semantic interpretation ⟦·⟧ maps each closed term of type τ to an element of the set‑theoretic interpretation of τ, i.e., a Boolean value for B and a set‑theoretic function for arrow types. The constant → is interpreted as the curried Boolean implication function, which returns true unless its antecedent is true and its consequent is false.
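The denotations of the two constants can be modeled directly over Python booleans; this is a hedged illustration of the semantics just described, and the names `falsity` and `imp` are ours, not the paper's notation.

```python
# A minimal model of the two constants' semantics, with Python's bool
# standing in for the base type B.

falsity = False  # the constant ⊥ : B denotes the truth value "false"

def imp(p: bool, q: bool) -> bool:
    # → : true unless the antecedent is true and the consequent is false
    return (not p) or q

# The full truth table of →:
for p in (False, True):
    for q in (False, True):
        print(f"{p} -> {q} = {imp(p, q)}")
```

The only false row of the table is `True -> False`, matching the usual classical reading of implication.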
The core of the denotational completeness proof is an inductive construction showing that for every type τ and every semantic object v ∈ ⟦τ⟧ there exists a closed λ‑term t of type τ such that ⟦t⟧ = v. The base case (τ = B) relies on a classic result: every Boolean function can be expressed using only falsity and implication. The authors give an explicit normal‑form translation: a Boolean function f of n arguments is represented as a disjunction of conjunctions, each conjunction being a chain of implications that reproduces the truth table of f. Since negation, conjunction, and disjunction are all definable from ⊥ and → (¬p ≡ p → ⊥, p ∧ q ≡ ¬(p → ¬q), p ∨ q ≡ ¬p → q), the construction stays within the two‑constant language. The inductive step handles a function type σ → τ by observing that ⟦σ⟧ is finite, so a function in ⟦σ → τ⟧ is determined by finitely many cases: the induction hypothesis supplies a denoting term for each argument and each result, and λ‑abstraction assembles these cases into a single term of type σ → τ that yields the appropriate τ‑value for every argument. This recursion yields a term for any higher‑order object, establishing full denotational completeness.
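The base-case encodings can be checked mechanically. The sketch below, with illustrative function names of our choosing, derives ¬, ∧, and ∨ from ⊥ and → exactly as in the equivalences above and verifies them exhaustively against Python's native connectives.

```python
from itertools import product

FALSITY = False                  # ⊥

def imp(p: bool, q: bool) -> bool:
    return (not p) or q          # → : the one non-trivial primitive

def neg(p):                      # ¬p    ≡ p → ⊥
    return imp(p, FALSITY)

def conj(p, q):                  # p ∧ q ≡ ¬(p → ¬q)
    return neg(imp(p, neg(q)))

def disj(p, q):                  # p ∨ q ≡ ¬p → q
    return imp(neg(p), q)

# Exhaustive check: the derived connectives agree with not/and/or
# on every assignment, so {⊥, →} suffices for the base case.
for p, q in product((False, True), repeat=2):
    assert neg(p) == (not p)
    assert conj(p, q) == (p and q)
    assert disj(p, q) == (p or q)
print("all encodings agree with the native connectives")
```

Since every n-ary Boolean function has a disjunctive normal form, and ∨, ∧, ¬ are now available, every such function is expressible in the two-constant language.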
On the proof‑theoretic side, the authors introduce a sequent calculus that extends a standard natural‑deduction system for propositional logic. The basic rules are the usual introduction and elimination rules for →, the ex falso rule for ⊥, and a structural rule for weakening. Two additional rules are crucial: (1) λ‑conversion, which allows β‑reduction and η‑expansion within proofs, ensuring that syntactically different but extensionally equal terms can be interchanged; and (2) Boolean replacement, a rule that permits the substitution of any two terms that denote the same Boolean value. This latter rule compensates for the scarcity of primitive connectives: whenever a proof needs to replace a sub‑formula with an equivalent one (e.g., swapping p → q with ¬p ∨ q), Boolean replacement supplies the necessary step without having to introduce new constants.
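The semantic condition behind Boolean replacement can be sketched as follows. Two open terms of type B may be interchanged when they denote the same truth value under every assignment to their free variables; representing terms as Python callables over an assignment is our illustration, not the paper's formalism.

```python
from itertools import product

def imp(p, q):
    return (not p) or q

def interchangeable(s, t, free_vars):
    """s, t: functions from an assignment dict to bool.  They may be
    swapped by Boolean replacement iff they agree everywhere."""
    for values in product((False, True), repeat=len(free_vars)):
        env = dict(zip(free_vars, values))
        if s(env) != t(env):
            return False
    return True

# Example from the text: p → q may replace ¬p ∨ q, where ¬x ≡ x → ⊥
# and a ∨ b ≡ ¬a → b, so ¬p ∨ q unfolds to ¬¬p → q.
s = lambda env: imp(env["p"], env["q"])
t = lambda env: imp(imp(imp(env["p"], False), False), env["q"])
assert interchangeable(s, t, ["p", "q"])

# A non-example: p → q does not everywhere agree with p alone.
u = lambda env: env["p"]
assert not interchangeable(s, u, ["p", "q"])
```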
The deductive completeness theorem is proved by first establishing a normal‑form theorem: any derivation can be transformed, using λ‑conversion and Boolean replacement, into a derivation where every formula is in a canonical form built solely from ⊥, →, variables, and λ‑abstractions. Then, using the denotational completeness construction, the authors show that any semantically valid sequent Γ ⊢ φ can be turned into a syntactic proof. The argument proceeds by induction on the structure of φ and on the complexity of the types involved. For atomic formulas the Boolean replacement rule directly yields a proof; for implications, the usual →‑introduction and →‑elimination rules are employed, guided by the normal form of the premises. The ex falso rule handles the case where ⊥ appears in the antecedent. Throughout, λ‑conversion guarantees that the terms used in the proof correspond exactly to the denotations constructed in the semantic part. Consequently, every semantically valid sequent is provable in the system, establishing deductive completeness.
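Because the base type is two-valued and all type interpretations are finite, the semantic side of the completeness statement is decidable by exhaustive evaluation. The sketch below checks validity of a propositional sequent Γ ⊢ φ over B by enumerating assignments; the representation of formulas as callables is an assumption of this illustration.

```python
from itertools import product

def imp(p, q):
    return (not p) or q

def valid(premises, conclusion, free_vars):
    """A sequent Γ ⊢ φ is semantically valid when every assignment
    making all of Γ true also makes φ true."""
    for values in product((False, True), repeat=len(free_vars)):
        env = dict(zip(free_vars, values))
        if all(g(env) for g in premises) and not conclusion(env):
            return False
    return True

p = lambda e: e["p"]
q = lambda e: e["q"]
p_imp_q = lambda e: imp(e["p"], e["q"])

# Modus ponens: p, p → q ⊢ q is valid ...
assert valid([p, p_imp_q], q, ["p", "q"])
# ... whereas q ⊢ p is refuted by the assignment p=False, q=True.
assert not valid([q], p, ["p", "q"])
```

Deductive completeness then says that every sequent this semantic check accepts has a syntactic derivation in the two-rule-extended system.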
The significance of these results is twofold. Conceptually, they demonstrate that propositional type theory can be reduced to an extremely austere language without loss of expressive power or proof strength. Practically, the minimalism has implications for the design of automated theorem provers and proof assistants: a smaller set of primitives simplifies term representation, reduces the search space for proof search, and eases the implementation of normalization procedures. Moreover, the Boolean replacement rule offers a flexible mechanism for handling equivalence transformations that would otherwise require a richer logical vocabulary.
In the concluding discussion, the authors suggest several avenues for future work. One direction is to investigate whether even a single constant (e.g., only ⊥) could suffice when combined with more sophisticated λ‑terms, potentially leading to a yet more parsimonious foundation. Another is to extend the approach to richer type theories, such as those incorporating quantifiers or dependent types, to see whether similar minimality results can be achieved. Finally, they propose exploring the computational complexity of proof search in this reduced system, as the scarcity of connectives may both simplify and complicate algorithmic aspects of proof construction. Overall, the paper provides a clear, technically rigorous demonstration that a minimal propositional type theory is both semantically and deductively complete, opening the door to leaner logical frameworks and more efficient formal reasoning tools.