Automated Complexity Analysis Based on the Dependency Pair Method

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

This article is concerned with automated complexity analysis of term rewrite systems. Since these systems underlie much of declarative programming, the time complexity of functions defined by rewrite systems is of particular interest. Among other results, we present a variant of the dependency pair method for automatically analysing the runtime complexity of term rewrite systems. The established results significantly extend previously known techniques: we give examples of rewrite systems amenable to our methods that could not previously be analysed automatically. Furthermore, the techniques have been implemented in the Tyrolean Complexity Tool, and we provide ample numerical data for assessing the viability of the method.


💡 Research Summary

The paper addresses the problem of automatically estimating the runtime complexity of term rewrite systems (TRSs), which are the formal foundation of many declarative programming languages. While termination analysis of TRSs is a well‑studied area, deriving tight bounds on the number of rewrite steps needed to evaluate a function (i.e., its runtime complexity) is considerably more challenging. The authors propose a novel framework that adapts the Dependency Pair (DP) method—originally designed for proving termination—to the quantitative setting of complexity analysis.

The work begins by distinguishing two classic notions: derivational complexity, which measures the maximal length of any rewrite sequence starting from a term of size at most n, and runtime complexity, which restricts attention to basic terms (terms whose root is a defined symbol and whose arguments are constructor terms, hence already in normal form). This restriction mirrors the way algorithmic complexity is measured for functional programs: only the “input” part of a term is counted, while auxiliary arguments that are themselves results of previous rewrites are ignored. The authors formalise these concepts using the set T_b of basic terms and the innermost rewrite relation, written i→_R.
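The runtime-complexity measure described above can be made concrete with a toy step counter. The sketch below is illustrative only (it is not the paper's tooling): it counts innermost rewrite steps starting from a basic term, for a small addition system on Peano numerals that is our own example, not taken from the paper.

```python
# Toy sketch: count innermost rewrite steps from a *basic* term f(t1, ..., tn),
# where the ti are constructor terms. Terms are nested tuples; the TRS is
# addition on Peano numerals (an invented illustrative example).

def numeral(n):
    """Build the constructor term s^n(0)."""
    t = ('0',)
    for _ in range(n):
        t = ('s', t)
    return t

def step(term):
    """Perform one innermost rewrite step; return None on normal forms."""
    head, *args = term
    for i, a in enumerate(args):          # rewrite arguments first: innermost
        r = step(a)
        if r is not None:
            return (head, *args[:i], r, *args[i + 1:])
    if head == 'add' and args[0] == ('0',):        # add(0, y) -> y
        return args[1]
    if head == 'add' and args[0][0] == 's':        # add(s(x), y) -> s(add(x, y))
        return ('s', ('add', args[0][1], args[1]))
    return None

def derivation_length(term):
    """Number of innermost steps to reach a normal form."""
    n = 0
    while (nxt := step(term)) is not None:
        term, n = nxt, n + 1
    return n

# Runtime complexity of this system is linear: n + 1 steps for first argument s^n(0).
print(derivation_length(('add', numeral(5), numeral(0))))
```

Starting only from basic terms is what makes this "runtime" rather than "derivational" complexity: the arguments are pure constructor data, as they would be when a program is called on an input.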

To obtain polynomial upper bounds, the paper leverages matrix interpretations over the natural numbers. A matrix interpretation assigns to each n-ary function symbol a linear mapping (x_1, …, x_n) ↦ F_1 x_1 + ⋯ + F_n x_n + f, where the F_i are square matrices and f is a constant vector. By restricting the matrices to an upper-triangular shape with diagonal entries at most 1, the authors guarantee that the interpretation of any term grows at most polynomially in the size of the term. They introduce the notion of a Triangular Matrix Interpretation (TMI) and prove that for a TMI of dimension d, the entries of the n-th power of the maximal matrix grow as O(n^(d−1)). Consequently, the first component of the interpretation provides a G-collapsible measure that can be used to bound derivation heights.

A central contribution is the Weight Gap Principle. In the DP framework, each rewrite rule gives rise to a set of dependency pairs that capture the recursive structure of the system. The weight gap principle states that if, for every DP, the interpretation of the left‑hand side exceeds that of the right‑hand side by at least a fixed positive amount (the “gap”), then the total number of DP steps in any innermost derivation is bounded by the initial weight divided by the gap. This principle allows the authors to reason modularly: they can analyse separate components of the DP graph, compute local gaps, and combine the results to obtain a global polynomial bound.
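The counting argument behind the principle fits in a few lines. The following sketch is an illustration under the stated assumption that every DP step strictly decreases a nonnegative weight by at least the gap; the weights and gap values are invented, and the function names are ours.

```python
# Sketch of the counting argument behind the weight gap principle: if every
# DP step decreases a nonnegative weight by at least gap > 0, then a
# derivation starting at weight W contains at most W // gap DP steps.
# (Weights and decreases below are invented for illustration.)

def bound(initial_weight, gap):
    assert gap > 0
    return initial_weight // gap

def simulate(initial_weight, gap, decreases):
    """Count DP steps along a derivation whose i-th step drops the weight
    by decreases[i] >= gap; the weight can never go negative."""
    w, steps = initial_weight, 0
    for d in decreases:
        assert d >= gap
        if w - d < 0:
            break
        w -= d
        steps += 1
    return steps

W, g = 20, 3
assert simulate(W, g, [3] * 10) <= bound(W, g)
```

Modularity comes for free from this arithmetic: if separate DP-graph components have gaps g_1 and g_2, each component's step count is bounded by its own W/g_i, and the bounds add up.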

Another innovation is the introduction of Usable Replacement Maps (URMs). A replacement map (\mu) specifies, for each function symbol, which argument positions are allowed to be rewritten. By constructing URMs from the DP graph, the method discards rules that are never used in the evaluation of basic terms, thereby weakening the monotonicity constraints that would otherwise be required. This is especially powerful for duplicating TRSs, where a variable may appear more often on the right‑hand side than on the left‑hand side, a situation that traditionally leads to exponential derivational complexity. The combination of URMs with TMIs enables the analysis of many such systems that were previously out of reach.
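A replacement map can be illustrated with a small context-sensitive rewriter. The symbols, rules, and map below are invented for illustration and are not taken from the paper; the point is only that rewriting is permitted exclusively at the argument positions the map allows.

```python
# Sketch of a replacement map mu: for each symbol, the (0-based) argument
# positions in which rewriting is permitted. With mu('if') = {0}, only the
# condition of an if-term may be evaluated, so the untaken branch is never
# rewritten. (Symbols and rules are invented for illustration.)

MU = {'if': {0}, 'not': {0}}

def step(term, mu=MU):
    head, *args = term
    allowed = mu.get(head, set(range(len(args))))
    for i, a in enumerate(args):
        if i in allowed:                  # descend only at mu-positions
            r = step(a, mu)
            if r is not None:
                return (head, *args[:i], r, *args[i + 1:])
    if head == 'not' and args[0] == ('true',):
        return ('false',)
    if head == 'not' and args[0] == ('false',):
        return ('true',)
    if head == 'if' and args[0] == ('true',):
        return args[1]
    if head == 'if' and args[0] == ('false',):
        return args[2]
    return None

t = ('if', ('not', ('false',)), ('yes',), ('no',))
t = step(t)            # evaluates the condition: if(true, yes, no)
t = step(t)            # takes the branch: yes
assert t == ('yes',)
```

Because the rewriter never descends into the branch positions, an interpretation need not be monotone in those positions, which is the relaxation the URM construction exploits.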

The theoretical results are implemented in the Tyrolean Complexity Tool (TCT). The authors evaluate the tool on a comprehensive benchmark suite, including classic examples such as the division‑by‑two TRS (div2) that exhibits exponential derivational complexity but only linear runtime complexity. The experiments demonstrate that TCT can automatically derive linear or quadratic bounds for many systems where earlier tools either failed or produced only coarse exponential bounds. Moreover, the weight gap principle together with URMs significantly improves both the success rate and the tightness of the obtained bounds.

In conclusion, the paper extends the dependency pair methodology from a qualitative termination technique to a quantitative complexity analysis framework. By integrating triangular matrix interpretations, the weight gap principle, and usable replacement maps, the authors provide a powerful, fully automated approach that substantially broadens the class of TRSs for which polynomial runtime bounds can be automatically proved. The work opens avenues for further research, such as exploring higher‑dimensional or non‑linear matrix interpretations, refining the construction of replacement maps, and applying the technique to richer programming languages beyond pure term rewriting.

