Ramified Structural Recursion and Corecursion
We investigate feasible computation over a fairly general notion of data and codata. Specifically, we present a direct Bellantoni-Cook-style normal/safe typed programming formalism, RS1, that expresses feasible structural recursions and corecursions over data and codata specified by polynomial functors. (Lists, streams, finite trees, infinite trees, etc. are all directly definable.) A novel aspect of RS1 is that it embraces structure-sharing as in standard functional-programming implementations. As our data representations use sharing, our implementations of structural recursion are memoized to avoid the possibly exponentially many repeated subcomputations a naive implementation might perform. We introduce notions of size for representations of data (accounting for sharing) and codata (using ideas from type-2 computational complexity) and establish that type-level 1 RS1-functions have polynomial-bounded runtimes and satisfy a polynomial-time completeness condition. Also, restricting RS1 terms to particular types produces characterizations of some standard complexity classes (e.g., omega-regular languages, linear-space functions) and some less-standard classes (e.g., log-space streams).
💡 Research Summary
The paper introduces RS1, a Bellantoni‑Cook‑style normal/safe typed programming language that captures feasible (polynomial‑time) computation over a broad class of data and codata defined by polynomial functors. Data types such as lists, finite trees, and streams, as well as codata like infinite streams or trees, are directly representable. A key novelty is that RS1 embraces structure‑sharing: values are represented as directed acyclic graphs (DAGs) rather than trees, mirroring the sharing performed by modern functional language implementations. Because recursive calls are evaluated on shared sub‑structures, the operational semantics memoizes each sub‑computation, guaranteeing that no sub‑term is evaluated more than once. This eliminates the exponential blow‑up that a naïve structural recursion could incur on highly shared inputs.
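The payoff of sharing-aware memoisation can be seen outside RS1 with a small Python sketch (the DAG encoding and the `leaf_count` function are our own illustration, not the paper's syntax): on a fully shared DAG, a naive structural recursion revisits each shared child twice per node, i.e. exponentially often, while memoising on node identity makes the cost linear in the number of distinct nodes.

```python
class Node:
    """A binary node; reusing the same child object on both sides
    turns the 'tree' into a directed acyclic graph (DAG)."""
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def leaf_count(node, memo=None):
    """Structural recursion memoised on node identity: each distinct
    sub-DAG is evaluated once, so runtime is linear in the graph size
    even when the unfolded tree is exponentially larger."""
    if memo is None:
        memo = {}
    if node is None:
        return 1  # a missing child counts as one leaf
    if id(node) not in memo:
        memo[id(node)] = leaf_count(node.left, memo) + leaf_count(node.right, memo)
    return memo[id(node)]

# A 50-level fully shared DAG: only 50 distinct internal nodes,
# but its tree unfolding has 2**50 leaves.
d = None
for _ in range(50):
    d = Node(d, d)
print(leaf_count(d))  # → 1125899906842624, i.e. 2**50, in ~50 steps
```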
The type system splits arguments into normal and safe zones, exactly as in Bellantoni‑Cook. Recursion may be driven only by normal arguments, which can be analysed freely; the results of recursive calls may appear only in safe positions, so they can never feed a further recursion, and size growth stays under control. This discipline yields a syntactic guarantee that any well‑typed level‑1 RS1 function (i.e., type‑level 1, taking no function-type arguments) runs in time polynomial in the size of its input representation.
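The discipline can be mimicked in ordinary code by convention, writing `f(normal; safe)` with the two zones kept apart. The following Python sketch of Bellantoni‑Cook-style addition and multiplication over unary numerals is illustrative only (RS1's syntax, and its generalisation to polynomial functors, differ): recursion is always driven by the normal argument, and each recursive result is consumed only in a safe position.

```python
# Bellantoni–Cook-style definitions written as f(normal_args; safe_args).
# The function names and the unary encoding are our own illustration.

def succ(y):                 # succ(; y): y is safe
    return y + 1

def add(x, y):               # add(x; y): recursion on the NORMAL argument x;
    if x == 0:               # the recursive result feeds only succ's safe slot
        return y
    return succ(add(x - 1, y))

def mul(x, w):               # mul(x, w; ): recursion on normal x; the
    if x == 0:               # recursive result sits in add's SAFE slot,
        return 0             # so it can never drive further recursion
    return add(w, mul(x - 1, w))

print(mul(7, 6))  # → 42
```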
Two notions of size are defined to make the polynomial bound precise. For data, the graph size counts the number of distinct nodes in the DAG, thus reflecting sharing. For codata, the authors adopt a type‑2 complexity perspective: the observer size measures the amount of information that can be extracted by a finite‑time observer, which is sufficient because codata are accessed only through a bounded number of observations in any feasible computation. With these measures, the authors prove:
- Time‑Complexity Theorem – Every level‑1 RS1 function terminates in time bounded by a polynomial in the graph size of its input and the observer size of any codata arguments.
- Polynomial‑Time Completeness – For every function computable in deterministic polynomial time, there exists an equivalent RS1 program (possibly after encoding the input as a suitable data term).
- Class Characterisations – By restricting the types of RS1 terms, one obtains exact characterisations of several well‑known and less‑studied complexity classes:
  - ω‑regular languages arise when RS1 terms act as recognisers over infinite streams.
  - Linear‑space functions correspond to programs whose outputs are finite trees and whose recursion depth is linearly bounded.
  - Log‑space streams emerge when codata are infinite streams but each step uses only logarithmic workspace, a novel class captured naturally by RS1’s safe‑stream type.
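To give a flavour of the stream characterisations (the encodings below are our own, not the paper's): a finite-state transducer over an infinite 0/1 stream models the constant-workspace, ω‑regular setting, while a stream whose n‑th observation touches only the O(log n) bits of a counter loosely models a "log‑space stream".

```python
from itertools import islice

def parity_stream(bits):
    """Running-parity transducer: one bit of workspace per observation,
    the finite-state regime of the omega-regular characterisation."""
    p = 0
    for b in bits:
        p ^= b
        yield p

def thue_morse():
    """An infinite 0/1 stream: bit n is the parity of popcount(n).
    Producing observation n touches only the O(log n)-bit counter,
    loosely illustrating a 'log-space stream'."""
    n = 0
    while True:
        yield bin(n).count("1") % 2
        n += 1

print(list(islice(thue_morse(), 8)))                  # → [0, 1, 1, 0, 1, 0, 0, 1]
print(list(islice(parity_stream(thue_morse()), 8)))   # → [0, 1, 0, 0, 1, 1, 1, 0]
```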
The paper also discusses implementation aspects. The memoisation mechanism can be realised by the usual graph reduction techniques of functional languages (e.g., Haskell’s lazy evaluation). The authors provide an algorithm for computing the graph size of a value at run‑time, enabling dynamic monitoring of resource usage.
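Counting graph size at run time amounts to a traversal that records visited node identities; the following Python sketch is our own rendering of that idea, not the paper's algorithm.

```python
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def graph_size(root):
    """Number of distinct nodes reachable from `root` (sharing counted
    once); contrast with the unfolded tree size, which counts every
    occurrence and can be exponentially larger."""
    seen, stack = set(), [root]
    while stack:
        node = stack.pop()
        if node is None or id(node) in seen:
            continue
        seen.add(id(node))
        stack.append(node.left)
        stack.append(node.right)
    return len(seen)

# A 30-level fully shared DAG: graph size 30, yet its tree unfolding
# has 2**30 - 1 internal nodes.
d = None
for _ in range(30):
    d = Node(d, d)
print(graph_size(d))  # → 30
```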
In comparison with earlier feasible recursion frameworks (Safe Recursion on Notation, Predicative Recursion, etc.), RS1 is more expressive because it allows arbitrary sharing and codata while retaining a simple, syntax‑driven complexity guarantee. Moreover, the use of polynomial functors makes the language modular: new data structures can be added by supplying their functorial description without altering the underlying complexity analysis.
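The "supply a functorial description" idea can be made concrete: a polynomial functor amounts to a table of constructors, each with some payload slots and some recursive slots, from which a generic fold (structural recursion) is derived once and for all. The encoding below is an illustrative Python sketch of that modularity, not RS1's actual mechanism.

```python
# A polynomial functor as a constructor table: name -> (#payload slots, #recursive slots).
# Lists over labels: Nil has no fields; Cons carries one label and one recursive tail.
LIST_F = {"Nil": (0, 0), "Cons": (1, 1)}

def make(ctor, *args):
    """Build a value of the functor's fixed point as a tagged tuple."""
    return (ctor,) + args

def fold(functor, algebra, value):
    """Generic structural recursion: replace each constructor by the
    corresponding algebra operation, recursing on the recursive slots."""
    ctor, *fields = value
    payload_n, rec_n = functor[ctor]
    payload = fields[:payload_n]
    recs = [fold(functor, algebra, f) for f in fields[payload_n:payload_n + rec_n]]
    return algebra[ctor](*payload, *recs)

xs = make("Cons", 1, make("Cons", 2, make("Cons", 3, make("Nil"))))
total = fold(LIST_F, {"Nil": lambda: 0, "Cons": lambda x, r: x + r}, xs)
print(total)  # → 6
```

A new data structure (say, binary trees) would only need its own table, e.g. `{"Leaf": (1, 0), "Branch": (0, 2)}`; `fold` and any size analysis built on it are unchanged.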
Overall, the work bridges the gap between theoretical feasible computation over inductive/coinductive structures and practical functional programming practice. By integrating sharing‑aware memoisation, a dual notion of size, and a Bellantoni‑Cook‑style type discipline, RS1 offers a robust platform for writing provably polynomial‑time programs that manipulate both finite and infinite data, and it yields fresh characterisations of complexity classes that were previously inaccessible to syntactic recursion schemes.