Implicit complexity for coinductive data: a characterization of corecurrence

We propose a framework for reasoning about programs that manipulate coinductive data as well as inductive data. Our approach is based on using equational programs, which support a seamless combination of computation and reasoning, and using productivity (fairness) as the fundamental assertion, rather than bisimulation. The latter is expressible in terms of the former. As an application of this framework, we give an implicit characterization of corecurrence: a function is definable using corecurrence iff its productivity is provable using coinduction for formulas in which data-predicates do not occur negatively. This is an analog, albeit in weaker form, of a characterization of recurrence (i.e. primitive recursion) in [Leivant, Unipolar induction, TCS 318, 2004].


💡 Research Summary

The paper introduces a unified framework for reasoning about programs that manipulate both inductive and coinductive data, focusing on the implicit complexity of coinductive (corecursive) definitions. The authors adopt equational programs as their formal model: functions are specified by a set of equations that combine pattern matching, recursive calls, and corecursive constructions (e.g., stream cons). This representation treats the program itself as a logical object, allowing computation and proof to be expressed in the same language.
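To make the equational style concrete, here is a minimal sketch in Python, assuming a stream is encoded as a pair of a head and a delayed tail (a thunk). The encoding and all names (`cons`, `ones`, `add`, `take`) are ours, chosen for illustration; the paper's formal syntax of equational programs may differ.

```python
def cons(h, tail_thunk):
    """Stream constructor: a head plus a delayed (thunked) tail."""
    return (h, tail_thunk)

# Equation:  ones = cons(1, ones)
def ones():
    return cons(1, ones)

# Equation:  add(cons(x, s), cons(y, t)) = cons(x + y, add(s, t))
# Pattern matching on both arguments, with a corecursive call in the tail.
def add(s, t):
    (x, s_tail), (y, t_tail) = s, t
    return cons(x + y, lambda: add(s_tail(), t_tail()))

def take(n, s):
    """A finite observation: force the first n elements of a stream."""
    out = []
    for _ in range(n):
        h, tail = s
        out.append(h)
        s = tail()
    return out
```

For instance, `take(4, add(ones(), ones()))` evaluates to `[2, 2, 2, 2]`: the equations are executed by unfolding, and each unfolding is driven by an observation.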

The central semantic notion is productivity (also called fairness). For a program that generates an infinite data structure, productivity guarantees that each finite observation step yields a concrete piece of data; in other words, the program never gets stuck in an unproductive loop. Rather than taking bisimulation as the primitive assertion (bisimulation is in fact expressible in terms of productivity), the authors prove productivity directly by coinduction on a restricted class of formulas. The restriction is that data‑predicates (predicates that talk about the shape or content of data) must not occur negatively (i.e., under a negation) in the coinductive hypothesis. This “negative‑free” condition mirrors Leivant’s unipolar induction for primitive recursion on inductive data.
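The productive/unproductive distinction can be seen in a toy sketch, again using our illustrative pair-and-thunk encoding of streams (not the paper's calculus): a definition whose every self-call sits under a stream constructor emits one element per observation, while a definition that asks for its own tail before producing anything diverges.

```python
def cons(h, tail_thunk):
    """Stream constructor: head plus delayed tail."""
    return (h, tail_thunk)

def take(n, s):
    """Finite observation of the first n elements."""
    out = []
    for _ in range(n):
        h, tail = s
        out.append(h)
        s = tail()
    return out

# Productive: the self-call is guarded by cons, so each observation
# step yields one concrete element.
def nats_from(k):
    return cons(k, lambda: nats_from(k + 1))

# Unproductive: the "equation" bad = tail(bad) requests bad's own tail
# before emitting any element; forcing it recurses forever (in CPython,
# this raises RecursionError).
def bad():
    _, tail = bad()
    return tail()
```

Productivity is exactly the guarantee that observations like `take(n, nats_from(0))` terminate for every `n`, which `bad` fails to provide.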

The main technical contribution consists of two theorems establishing an exact correspondence between corecursive definability and provable productivity under the negative‑free coinduction scheme.

  1. Corecursion ⇒ Provable Productivity.
    Any function defined by a corecursive scheme automatically yields a productive program: each unfolding step of the corecursive equation produces a finite observable fragment. The authors formalize this intuition by translating each corecursive clause into a coinductive rule that satisfies the productivity predicate. Consequently, a corecursive definition can be turned into a coinductive proof that the function is productive.
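The first direction can be illustrated by packaging corecursion as an unfold: a step function must return a head and a next seed outright, so every unfolding is guarded by a constructor and the resulting stream is productive by construction. This is a hedged sketch of the general idea in Python; the combinator `corec` and the encoding are ours, not the paper's formal scheme.

```python
def corec(step, seed):
    """Corecursive scheme (an unfold): step(seed) -> (head, next_seed).
    Each unfolding immediately yields an observable head, so any stream
    built this way is productive by construction."""
    h, next_seed = step(seed)
    return (h, lambda: corec(step, next_seed))

def take(n, s):
    """Finite observation of the first n elements."""
    out = []
    for _ in range(n):
        h, tail = s
        out.append(h)
        s = tail()
    return out

# The Fibonacci stream as an instance of the scheme: the seed carries
# the pair of the next two values.
fibs = corec(lambda ab: (ab[0], (ab[1], ab[0] + ab[1])), (0, 1))
```

Here `take(6, fibs)` yields `[0, 1, 1, 2, 3, 5]`; no matter what `step` does with the seed, each cell of the output is delivered before the next unfolding is demanded.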

  2. Provable Productivity ⇒ Corecursion.
    Conversely, if a function’s productivity can be proved by a coinductive argument that uses only negative‑free formulas, then one can extract a corecursive definition of the function. The extraction proceeds by analyzing the structure of the coinductive proof: the hypothesis provides a recipe for constructing the next observable piece, while the negative‑free restriction guarantees that this recipe does not depend on any “future” negated information. The resulting recipe can be expressed as a set of equational corecursive clauses, thereby showing that the function is corecursively definable.

Together, these theorems give an implicit characterization of corecursion: a function belongs to the corecursive class precisely when its productivity is provable in the restricted coinductive logic. This mirrors Leivant’s characterization of primitive recursion, where a function is primitive recursive iff its totality can be proved by induction on negative‑free formulas. The present result is weaker in the sense that the restriction to negative‑free formulas is essential; allowing negative occurrences would break the equivalence.

From a complexity‑theoretic perspective, corecursive functions are naturally “polytime‑friendly”: each step of a corecursive computation produces a bounded amount of data, and the overall runtime is essentially linear in the number of produced observations. Hence, the productivity proof not only certifies correctness but also implicitly bounds the computational complexity of the function.
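The linear-cost claim can be checked on the toy encoding by instrumenting a corecursive unfold with a step counter: producing n observations then forces exactly n unfolding steps. The counter, helper names, and encoding below are ours, intended only to illustrate the accounting, not the paper's cost model.

```python
def make_counted_stream(step, seed):
    """Build a stream from an unfold, counting each forced unfolding."""
    count = {"steps": 0}
    def go(s):
        count["steps"] += 1
        h, next_seed = step(s)
        return (h, lambda: go(next_seed))
    return go(seed), count

def take(n, s):
    """Observe n elements, forcing only the cells actually observed."""
    out = []
    for i in range(n):
        h, tail = s
        out.append(h)
        if i + 1 < n:          # do not force the tail past the prefix
            s = tail()
    return out

stream, count = make_counted_stream(lambda k: (k, k + 1), 0)
prefix = take(10, stream)      # 10 observations, hence 10 steps
```

Each observed element corresponds to exactly one evaluation of the step function, so the work done is linear in the number of observations demanded.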

The paper also discusses practical implications. Because equational programs are syntactically simple, they can be embedded in existing proof assistants such as Coq or Agda. A productivity checker based on the negative‑free coinduction rule could be automated, providing developers with immediate feedback on whether a corecursive definition is well‑behaved. Moreover, the extraction procedure from a productivity proof to a corecursive definition could be turned into a program transformation tool, enabling the systematic conversion of naïve recursive code into safe corecursive code for infinite data streams.
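As a hint of what such an automated productivity checker might look like, here is a toy syntactic guardedness check over a simplified term AST: a corecursive equation `f = body` is accepted only if every self-call in `body` sits beneath at least one stream constructor. The AST shape and the rule are our simplification for illustration, not the paper's negative-free coinduction rule.

```python
def guarded(body, fname, under_cons=False):
    """Toy guardedness check on terms of the form
    ("const", v) | ("call", name) | ("cons", head, tail)."""
    kind = body[0]
    if kind == "call":
        # A self-call is acceptable only beneath a constructor.
        return body[1] != fname or under_cons
    if kind == "cons":
        head, tail = body[1], body[2]
        # Only the tail position counts as guarded: computing the head
        # must not itself require the stream being defined.
        return guarded(head, fname, under_cons) and guarded(tail, fname, True)
    return True  # constants and other leaves are always fine

# ones = cons(1, ones)   -- self-call guarded by cons: accepted
ok = guarded(("cons", ("const", 1), ("call", "ones")), "ones")
# bad = bad              -- unguarded self-call: rejected
not_ok = guarded(("call", "bad"), "bad")
```

Checks of this syntactic flavor are what proof assistants like Coq and Agda already perform for coinductive definitions; the paper's logic-based criterion is more semantic, but could back a similar tool.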

Finally, the authors outline future work: extending the framework to allow limited negative occurrences (e.g., via guarded recursion), refining the implicit complexity measures (e.g., quantifying the rate of observation generation), and integrating the approach into mainstream functional languages to support large‑scale, verified, coinductive programming. In sum, the paper provides a solid theoretical bridge between coinductive reasoning, productivity, and implicit complexity, offering both a conceptual advance and a pathway toward practical verification tools for infinite‑data programs.