A type-theoretical approach to Universal Grammar
The idea of Universal Grammar (UG) as the hypothetical linguistic structure shared by all human languages harkens back at least to the 13th century. The best known modern elaborations of the idea are due to Chomsky. Following a devastating critique from theoretical, typological and field linguistics, these elaborations, the idea of UG itself and the more general idea of language universals stand untenable and are largely abandoned. The proposal tackles the hypothetical contents of UG using dependent and polymorphic type theory in a framework very different from the Chomskyan ones. We introduce a type logic for a precise, universal and parsimonious representation of natural language morphosyntax and compositional semantics. The logic handles grammatical ambiguity (with polymorphic types), selectional restrictions and diverse kinds of anaphora (with dependent types), and features a partly universal set of morphosyntactic types (by the Curry-Howard isomorphism).
💡 Research Summary
The paper confronts the long‑standing hypothesis of Universal Grammar (UG) by abandoning the Chomskyan tradition, which has been severely weakened by typological, field‑work, and theoretical critiques, and instead proposes a formal reconstruction of UG using modern type‑theoretic tools—specifically dependent types and polymorphic types. The authors begin with a historical overview that situates UG in a lineage dating back to the thirteenth century, then detail why the classic “innate set of grammatical rules” model no longer enjoys empirical support. They argue that the failure is not of the idea of universals per se, but of the particular formal machinery that has been used to capture them.
The core of the contribution is a type logic that simultaneously encodes morphosyntax and compositional semantics. In this logic, every lexical item and every syntactic constituent is assigned a type. Basic syntactic categories (Noun, Verb, Adjective, Preposition, etc.) form a “partly universal set of morphosyntactic types.” Languages differ by extending this core set with language‑specific types (e.g., Korean particles) and by providing additional type‑level constraints.
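The idea of a small universal core of categories plus function types can be sketched as follows. This is a simplified, illustrative encoding (not the paper's actual formalism): the lexicon, the category names, and the combination rule are invented for the example.

```python
# A toy encoding of "every lexical item gets a type": a small core of
# base categories (N, NP, S) plus function types, combined by application.
# Lexicon and categories are illustrative, not taken from the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class Base:
    name: str            # e.g. "N", "NP", "S"

@dataclass(frozen=True)
class Arrow:
    dom: object          # argument type
    cod: object          # result type

# Partly universal core of morphosyntactic categories.
N, NP, S = Base("N"), Base("NP"), Base("S")

# A toy lexicon: every lexical item is assigned a type.
LEXICON = {
    "cat":    N,
    "the":    Arrow(N, NP),      # determiner: N -> NP
    "sleeps": Arrow(NP, S),      # intransitive verb: NP -> S
}

def apply_type(fun, arg):
    """Combine two constituents by function application on their types."""
    if isinstance(fun, Arrow) and fun.dom == arg:
        return fun.cod
    raise TypeError(f"cannot apply {fun} to {arg}")

# "the cat sleeps": ((the cat) sleeps)
np = apply_type(LEXICON["the"], LEXICON["cat"])   # -> NP
s  = apply_type(LEXICON["sleeps"], np)            # -> S
```

A language-specific extension would simply add new `Base` categories and lexicon entries without touching the core.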
Dependent types are employed to model selectional restrictions and anaphoric dependencies. For instance, the verb “eat” is given the dependent type Πx:Food. V(x), meaning that it can only combine with an argument of type Food. This eliminates the need for ad‑hoc semantic feature checking: the type system itself guarantees that only semantically compatible arguments can be combined. Anaphoric relations (pronouns, reflexives, etc.) are likewise captured by dependent types that bind a pronoun’s type to the antecedent’s type, ensuring correct coreference through type‑level unification.
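The "eat : Πx:Food. V(x)" example can be sketched in a few lines. This is a minimal simulation of a dependent function type, assuming invented sort names (`Food`, `Artifact`) and a toy result-type constructor; the paper's real type system is not reproduced here.

```python
# A toy simulation of a selectional restriction as a dependent-style type:
# the checker itself rejects ill-sorted arguments, so no separate semantic
# feature-checking pass is needed. Sorts and words are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    word: str
    sort: str            # semantic sort, e.g. "Food", "Artifact"

@dataclass(frozen=True)
class Pi:
    """Dependent function type: Pi x : sort . result(x)."""
    sort: str
    result: object       # callable from the argument to a result type

def V(x):                # stands in for the V(x) of the summary's example
    return ("V", x.word)

EAT = Pi("Food", V)      # eat : Pi x : Food . V(x)

def apply_dep(fun: Pi, arg: Entity):
    """Application is only defined when the argument has the right sort."""
    if arg.sort != fun.sort:
        raise TypeError(f"{arg.word}: expected sort {fun.sort}, got {arg.sort}")
    return fun.result(arg)

apple = Entity("apple", "Food")
stone = Entity("stone", "Artifact")

result = apply_dep(EAT, apple)     # well-typed
# apply_dep(EAT, stone) raises TypeError: the combination is ill-formed
```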
Polymorphic types address grammatical ambiguity. When a phrase can serve multiple syntactic or semantic roles, it is assigned a polymorphic type that can be instantiated to any of the admissible concrete types depending on context. This approach replaces the traditional “ambiguity‑resolution rules” with a uniform type‑instantiation mechanism, preserving logical consistency while allowing flexibility.
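Type instantiation by context can be illustrated with a tiny unifier. The example below is a sketch under invented conventions (type variables are strings starting with `?`, function types are nested tuples); the schematic coordination type `a -> a -> a` for "and" is a standard textbook example, not a claim about the paper's lexicon.

```python
# A toy demonstration of resolving ambiguity by instantiating a
# polymorphic type: unifying the schematic type of "and" with its
# context fixes the type variable to one admissible concrete type.

def unify(t1, t2, subst):
    """Tiny first-order unification over tuples and '?'-prefixed variables."""
    t1, t2 = subst.get(t1, t1), subst.get(t2, t2)
    if t1 == t2:
        return subst
    if isinstance(t1, str) and t1.startswith("?"):
        return {**subst, t1: t2}
    if isinstance(t2, str) and t2.startswith("?"):
        return {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            subst = unify(a, b, subst)
        return subst
    raise TypeError(f"cannot unify {t1} with {t2}")

# "and" : ?a -> ?a -> ?a, encoded as nested ("->", dom, cod) tuples.
AND = ("->", "?a", ("->", "?a", "?a"))

# In "NP and NP" the context forces ?a = NP; in "S and S", ?a = S.
subst_np = unify(AND, ("->", "NP", ("->", "NP", "NP")), {})
subst_s  = unify(AND, ("->", "S",  ("->", "S",  "S")),  {})
```

One polymorphic entry thus replaces several disambiguation rules: the admissible readings are exactly the successful instantiations.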
A pivotal theoretical move is the exploitation of the Curry‑Howard isomorphism. Syntactic derivations are interpreted as proofs, and semantic composition corresponds to proof normalization. Consequently, a well‑formed sentence corresponds to a provable proposition, and its meaning is the normal form of the associated proof. This unifies syntax and semantics within a single deductive framework and yields a clear computational interpretation: parsing becomes type checking, and semantic interpretation becomes proof reduction.
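The "meaning as normal form" idea can be sketched with a minimal beta-reducer over lambda terms encoded as tuples. This is an illustrative toy (the term encoding, the lexical constants, and the derivation are invented), and the substitution is naive: it ignores variable capture, which is harmless here because the example has no colliding free variables.

```python
# A toy Curry-Howard reading: a syntactic derivation is a proof term,
# and computing the sentence's meaning is normalizing (beta-reducing) it.
# Terms: ("lam", var, body), ("app", fun, arg), or a constant/variable string.

def substitute(term, var, value):
    """Naive capture-permitting substitution; fine for this toy example."""
    if term == var:
        return value
    if isinstance(term, tuple) and term[0] == "lam":
        if term[1] == var:
            return term            # the variable is shadowed
        return ("lam", term[1], substitute(term[2], var, value))
    if isinstance(term, tuple) and term[0] == "app":
        return ("app", substitute(term[1], var, value),
                       substitute(term[2], var, value))
    return term

def normalize(term):
    """Repeatedly beta-reduce until no redex remains."""
    if isinstance(term, tuple) and term[0] == "app":
        fun, arg = normalize(term[1]), normalize(term[2])
        if isinstance(fun, tuple) and fun[0] == "lam":
            return normalize(substitute(fun[2], fun[1], arg))
        return ("app", fun, arg)
    if isinstance(term, tuple) and term[0] == "lam":
        return ("lam", term[1], normalize(term[2]))
    return term

# Derivation of "the cat sleeps" as a proof term; its normal form
# is the compositional meaning.
derivation = ("app", ("lam", "x", ("app", "sleep", "x")),
                     ("app", "the", "cat"))
meaning = normalize(derivation)
```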
The authors also sketch an implementation scenario. A parser built on this type logic would first assign provisional types to input tokens, then perform type inference to construct a proof tree (the syntactic structure). During proof reduction, selectional restrictions are enforced, anaphoric links are resolved, and ambiguous constituents are specialized via polymorphic instantiation. The resulting normal form yields a compositional semantic representation. Compared with conventional rule‑based parsers, this system guarantees logical soundness; compared with statistical models, it provides transparent, explainable analyses.
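The pipeline described above (assign provisional types, combine by type-driven inference, read the meaning off the result) can be condensed into a toy end-to-end parser. Everything here is an assumption for illustration: the lexicon, the pair-of-type-and-meaning representation, and the shift-combine strategy are invented, and the grammar is far simpler than anything the paper would need.

```python
# A toy end-to-end pipeline: each token gets a (type, meaning) pair,
# adjacent constituents are combined by type-driven application, and the
# final pair gives the syntactic category and the semantic term.

LEXICON = {
    "the":    (("N", "NP"), lambda n: ("the", n)),        # N -> NP
    "cat":    ("N",          "cat"),
    "sleeps": (("NP", "S"),  lambda np: ("sleep", np)),   # NP -> S
}

def combine(left, right):
    """Apply whichever side has a matching function type to the other."""
    (lt, lm), (rt, rm) = left, right
    if isinstance(lt, tuple) and lt[0] == rt:
        return (lt[1], lm(rm))
    if isinstance(rt, tuple) and rt[0] == lt:
        return (rt[1], rm(lm))
    raise TypeError(f"cannot combine {lt} and {rt}")

def parse(tokens):
    """Left-to-right shift-combine parse; enough for this toy grammar."""
    stack = []
    for tok in tokens:
        stack.append(LEXICON[tok])
        while len(stack) >= 2:
            right, left = stack.pop(), stack.pop()
            try:
                stack.append(combine(left, right))
            except TypeError:
                stack += [left, right]
                break
    (t, meaning), = stack     # a full parse leaves exactly one constituent
    return t, meaning

cat, sem = parse("the cat sleeps".split())
```

Selectional restrictions and polymorphic instantiation would slot into `combine` as extra side conditions, in the spirit of the earlier sketches.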
In the discussion, the paper emphasizes that the proposed “type‑theoretic UG” does not claim a monolithic, language‑independent grammar. Instead, it posits a minimal core of morphosyntactic types shared by all human languages, plus a principled mechanism for language‑specific extensions. This captures both the universality that UG seeks to explain and the empirical diversity observed across languages. The authors suggest several avenues for future work: extending the type system to handle discourse‑level phenomena, integrating probabilistic information to guide polymorphic instantiation, and testing the framework on typologically diverse corpora.
Overall, the paper offers a rigorous, parsimonious alternative to the Chomskyan UG, demonstrating how dependent and polymorphic type theory can provide a unified, computationally tractable account of universal linguistic structure, selectional constraints, anaphora, and ambiguity. It bridges the gap between formal linguistic theory and practical natural‑language processing, opening a promising research direction for both theoretical linguists and AI practitioners.