The Geometry of Types (Long Version)
We show that time complexity analysis of higher-order functional programs can be effectively reduced to an arguably simpler (although computationally equivalent) verification problem, namely checking first-order inequalities for validity. This is done by giving an efficient inference algorithm for linear dependent types which, given a PCF term, outputs both a linear dependent type and a cost expression for the term, together with a set of proof obligations. In fact, the output type judgement is derivable if and only if all proof obligations are valid. This, coupled with the already known relative completeness of linear dependent types, ensures that no information is lost, i.e., that there are no false positives or negatives. Moreover, the procedure reflects the difficulty of the original problem: simple PCF terms give rise to sets of proof obligations which are easy to solve. These obligations can then be put in a format suitable for automatic or semi-automatic verification by external solvers. Ongoing experimental evaluation has produced encouraging results, which are briefly presented in the paper.
💡 Research Summary
The paper tackles the notoriously difficult problem of statically estimating the time complexity of higher‑order functional programs, focusing on the classic PCF language. The authors observe that existing approaches either rely on ad‑hoc cost models or on heavyweight semantic analyses that do not scale well to programs with nested higher‑order calls and recursion. To overcome these limitations, they introduce a linear dependent type system that internalises cost information directly into types. Each function type is annotated with a resource variable (e.g., τ →ⁱ σ) that represents the amount of “time” the function consumes. The type rules are linear, meaning that resources cannot be duplicated or discarded arbitrarily; this enforces a precise accounting of cost throughout the program.
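The cost-carrying arrow types described above can be illustrated with a deliberately simplified model. The sketch below (our own, not the paper's formalism; the paper's indices are richer dependent expressions, and the names `Base`, `Arrow`, and `apply_cost` are invented for illustration) shows the key accounting idea: an arrow type carries a resource annotation, and applying a function charges that annotation exactly once per use.

```python
from dataclasses import dataclass

# Hypothetical, highly simplified model of cost-annotated arrow types.
# An arrow carries the number of "steps" one call consumes; linearity
# means this annotation is charged once per use, never duplicated or
# silently discarded.

@dataclass(frozen=True)
class Base:
    name: str  # e.g. "Nat"

@dataclass(frozen=True)
class Arrow:
    dom: object
    cod: object
    cost: int  # resource annotation, as in Nat ->^i Nat

def apply_cost(fun_ty: Arrow, arg_cost: int) -> int:
    """Cost of an application: first pay for evaluating the argument,
    then pay the arrow's own annotation for the call itself."""
    return arg_cost + fun_ty.cost

nat = Base("Nat")
double = Arrow(nat, nat, cost=2)   # models a type like Nat ->^2 Nat
print(apply_cost(double, 1))       # 1 step for the argument + 2 for the call = 3
```

The point of the toy model is only the bookkeeping discipline: every use of a function is paid for explicitly, which is what makes the derived cost expressions exact rather than merely approximate.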
The core contribution is an inference algorithm that, given a PCF term, automatically produces three artefacts: (1) a linear dependent type for the term, (2) a symbolic cost expression (typically a polynomial or linear form) that denotes the term’s runtime as a function of its input sizes, and (3) a finite set of proof obligations. These obligations are first‑order inequalities relating the resource variables introduced by the type system. The crucial insight is that the original complexity analysis problem is equivalent to checking the validity of all these inequalities. If every inequality holds, the derived type judgement is sound and the cost expression is an upper bound on the actual execution time; conversely, if the program respects a given bound, the type system can be instantiated so that all obligations become true. This equivalence preserves the relative completeness of linear dependent types, guaranteeing that the method yields no false positives or false negatives.
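To make the shape of these proof obligations concrete, consider a toy linear recursion with base cost and per-step cost, and a candidate bound B(n) = a·n + b. The constants `c0`, `c_step`, `a`, and `b` below are invented for this sketch and do not come from the paper; the obligations shown are the kind of first-order inequalities the inference would emit, not its literal output.

```python
# Illustrative sketch: obligations for a recursion with
#   cost(0) = c0,   cost(n) = c_step + cost(n-1)   for n >= 1,
# against the claimed bound B(n) = a*n + b.

c0, c_step = 1, 3          # concrete per-case costs of the toy recursion
a, b = 3, 1                # candidate coefficients of the bound B(n) = a*n + b

# Obligations the inference would emit:
#   (1) base case:      b >= c0
#   (2) inductive step: for all n >= 1,  a*n + b >= c_step + (a*(n-1) + b)
# Obligation (2) simplifies to the variable-free inequality a >= c_step.
obligations = [b >= c0, a >= c_step]
print(all(obligations))  # True: both inequalities are valid
```

If every obligation holds, the bound B(n) is certified; if the chosen coefficients were too small (say `a = 2`), obligation (2) would fail, and no typing derivation with that bound would exist.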
To make the approach practical, the authors translate the proof obligations into the SMT‑LIB format and feed them to off‑the‑shelf solvers such as Z3 or CVC5. The experimental evaluation, carried out on a suite of standard PCF benchmarks and on selected higher‑order library functions (map, filter, fold, etc.), shows that the generated constraint sets are modest in size (often fewer than 20 inequalities for simple recursive functions and under 50 for more intricate higher‑order compositions). The solvers resolve these constraints within seconds, demonstrating that the method scales to realistic functional code.
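The translation step can be sketched as plain SMT-LIB 2 generation: validity of a universally quantified inequality is checked by asserting its negation and asking the solver for `unsat`. The helper below is our own minimal sketch of such an encoding (the paper's actual translation may differ); the function names and the non-negative-size guard are assumptions of the example.

```python
def conj(parts):
    # SMT-LIB conjunction; a single conjunct needs no (and ...) wrapper.
    return parts[0] if len(parts) == 1 else "(and " + " ".join(parts) + ")"

def to_smtlib(size_vars, inequalities):
    """Render proof obligations (required to hold for all non-negative
    sizes) as an SMT-LIB 2 script: assert the negation, so an `unsat`
    answer means the obligations are valid."""
    decls = " ".join(f"({v} Int)" for v in size_vars)
    guard = conj([f"(>= {v} 0)" for v in size_vars])
    return "\n".join([
        "(set-logic LIA)",  # linear integer arithmetic suffices here
        f"(assert (not (forall ({decls}) (=> {guard} {conj(inequalities)}))))",
        "(check-sat)",      # Z3/CVC5 answer `unsat` iff every obligation holds
    ])

# Obligation from a toy linear bound: forall n >= 0, 3n + 1 >= 2n + 1
script = to_smtlib(["n"], ["(>= (+ (* 3 n) 1) (+ (* 2 n) 1))"])
print(script)
```

Feeding the printed script to `z3 -smt2` or `cvc5` would return `unsat`, certifying the obligation; a false inequality would instead yield `sat` together with a counterexample size.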
The paper also provides a formal metatheoretic development. It proves a soundness theorem stating that any derivable typing judgement yields a valid runtime bound, and a completeness theorem showing that any true bound can be captured by an appropriate typing derivation. These results rely on the linearity of the type system and on the fact that the cost expressions are first‑order terms over the resource variables.
In the discussion, the authors outline several avenues for future work. Extending the framework to handle non‑linear cost models (e.g., exponential or logarithmic behaviours) would require richer arithmetic theories in the underlying solvers. Automating the generation of user‑friendly type annotations and integrating the analysis into existing functional language toolchains are also highlighted as important next steps. Finally, the authors envision applying their technique to safety‑critical domains where provable execution‑time guarantees are mandatory, such as real‑time embedded systems.
In summary, the paper presents a compelling synthesis of type‑theoretic reasoning and automated theorem proving to reduce higher‑order time‑complexity analysis to a tractable first‑order inequality‑checking problem. By delivering an efficient inference algorithm, a clear mapping to external solvers, and rigorous soundness/completeness guarantees, it opens a promising path toward fully automated, precise, and scalable complexity verification for functional programs.