Ackermannian and Primitive-Recursive Bounds with Dickson's Lemma
Dickson’s Lemma is a simple yet powerful tool widely used in termination proofs, especially when dealing with counters or related data structures. However, most computer scientists do not know how to derive complexity upper bounds from such termination proofs, and the existing literature is not very helpful in these matters. We propose a new analysis of the length of bad sequences over (ℕᵏ, ≤) and explain how one may derive complexity upper bounds from termination proofs. Our upper bounds improve earlier results and are essentially tight.
💡 Research Summary
The paper investigates the quantitative side of Dickson’s Lemma, a cornerstone result stating that the product order on ℕᵏ is a well‑quasi‑ordering. While the lemma is routinely used to prove termination of counter‑based programs, model‑checking procedures, and other algorithmic constructions, the literature offers little guidance on extracting concrete complexity bounds from such termination arguments.
The authors introduce the notion of a controlled bad sequence. A sequence x₀, x₁, … over ℕᵏ is called bad if it contains no increasing pair (i.e., no i<j with xᵢ ≤ xⱼ). To avoid pathological “jumps” that can make bad sequences arbitrarily long, they fix an increasing control function f:ℕ→ℕ (with f(0)>0) and a shift parameter t∈ℕ. A sequence is t‑controlled if every element’s infinity‑norm satisfies ‖xᵢ‖∞ < f(i+t). For a given dimension k, control f, and shift t, they define L_{k,f}(t) as the maximal length of a t‑controlled bad sequence.
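These definitions are concrete enough to check by exhaustive search. The sketch below is our own illustration (the function `max_bad_length` and its defaults are hypothetical, not from the paper): it enumerates t‑controlled bad sequences over ℕᵏ for the successor control f(x)=x+1 and returns the maximal length.

```python
from itertools import product

def max_bad_length(k, t, f=lambda x: x + 1):
    """Maximal length of a t-controlled bad sequence over N^k:
    every element satisfies ||x_i||_inf < f(i + t), and no i < j
    has x_i <= x_j componentwise.  Brute force; tiny inputs only."""
    def extend(seq):
        i = len(seq)
        best = i
        for cand in product(range(f(i + t)), repeat=k):
            # keep the sequence bad: no earlier element may be <= cand
            if all(any(p > c for p, c in zip(prev, cand)) for prev in seq):
                best = max(best, extend(seq + [cand]))
        return best
    return extend([])
```

For k = 1 the longest sequence is simply f(t)−1, f(t)−2, …, 0, of length f(t) = t+1, which the search confirms.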
A key technical innovation is to work not directly with ℕᵏ but with sums of powers of ℕ. Any finite multiset τ of natural numbers denotes a type; the corresponding set N_τ is the disjoint union of ℕᵏ for each k∈τ. This representation allows the authors to decompose a bad sequence into regions where a particular coordinate is fixed, thereby reducing the dimensionality of the problem. Concretely, for τ={k} they show
L_{{k}}(t) ≤ 1 + L_{N_k(t)×{k‑1}}(t+1)
where N_k(t)=k·(f(t)−1) is the number of regions induced by the first element. For a general type τ they define a transformation
τ_h(k,t) = τ \ {k} + N_k(t) × {k‑1}
and prove the master inequality
L_τ(t) ≤ max_{k∈τ} {1 + L_{τ_h(k,t)}(t+1)}.
Thus the maximal length can be computed inductively on the multiset structure of τ.
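To make the induction concrete, here is a small evaluator for the recurrence (our own sketch; `L_upper` and the tuple encoding of types are hypothetical names). A type τ is represented as a sorted tuple of exponents, N_k(t) = k·(f(t)−1) as above, and a 0‑exponent component (a singleton ℕ⁰) contributes exactly one element before disappearing.

```python
def L_upper(tau, t, f=lambda x: x + 1):
    """Evaluate the master recurrence as an upper bound:
    L_tau(t) <= max over k in tau of 1 + L_{tau_h(k,t)}(t+1),
    where tau_h(k,t) removes one copy of k and adds N_k(t) copies
    of k-1, with N_k(t) = k * (f(t) - 1).  The empty type allows
    only the empty sequence."""
    if not tau:
        return 0
    best = 0
    for k in set(tau):
        rest = list(tau)
        rest.remove(k)                        # tau \ {k}
        if k > 0:
            rest += [k - 1] * (k * (f(t) - 1))  # N_k(t) copies of k-1
        best = max(best, 1 + L_upper(tuple(sorted(rest)), t + 1, f))
    return best
```

For τ = {1} this reproduces the exact value L₁(t) = t+1; for higher dimensions it yields a valid (though not necessarily tight) upper bound.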
The second major contribution is to locate these inductively defined bounds within the Fast‑Growing Hierarchy (FGH). If the control function f belongs to level F_γ of the hierarchy, the recurrence above yields
L_{k,f}(t) ∈ F_{γ+k−1}.
In the important special case of a linear control f(x)=x+1, the bound simplifies to L_{k,f}(t) ∈ F_k. This improves on the classic result of McAloon (1984), which placed the bound at F_{k+1}, and matches the lower bound constructed by the authors, showing optimality.
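For orientation, the finite levels of the hierarchy are easy to compute at small arguments. A minimal sketch of the standard Löb–Wainer scheme, F₀(x) = x+1 and F_{α+1}(x) = F_α^{x+1}(x), the (x+1)-fold iterate:

```python
def F(alpha, x):
    """Fast-growing hierarchy at finite levels: F_0 is the successor,
    and F_{a+1}(x) applies F_a a total of (x + 1) times to x."""
    if alpha == 0:
        return x + 1
    for _ in range(x + 1):
        x = F(alpha - 1, x)
    return x
```

This gives F₁(x) = 2x+1 and F₂(x) = 2^{x+1}(x+1) − 1, so F₂ is already exponential and F₃ grows like a tower of exponentials.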
The paper also extends the analysis to r‑bad sequences, i.e., sequences that avoid any increasing subsequence of length r+1. By a simple encoding they prove that the maximal length of a t‑controlled r‑bad sequence over a type τ equals L_{r×τ}(t), so the same machinery applies.
To demonstrate practical relevance, the authors present a “user guide” with three detailed case studies:
- A simple two‑counter program – By modeling configurations as ℕ² and using f(x)=x+1, the maximal number of loop iterations (Time(a,b)) is shown to lie in F₂, an elementary bound.
- Emptiness for increasing counter automata – The automaton’s state space is encoded as a type τ of moderate size; the derived bound falls into F₃, illustrating how the method yields non‑trivial complexity estimates for verification problems.
- Karp‑Miller coverability trees – The construction of the tree is interpreted as generating a controlled bad sequence over a high‑dimensional type; the resulting bound places the overall algorithm in a specific FGH level, clarifying its theoretical complexity.
Throughout, the paper emphasizes the separation of two concerns: (i) the combinatorial decomposition of bad sequences (independent of f) and (ii) the placement of the resulting bound within the FGH (where f matters). This modular approach simplifies proofs, yields tighter bounds, and makes the technique accessible to practitioners who may not be experts in ordinal analysis.
In summary, the work provides a clean, self‑contained proof that for linear control functions the length of controlled bad sequences over ℕᵏ is exactly at level F_k of the Fast‑Growing Hierarchy, and more generally gives tight FGH bounds for arbitrary control functions. By coupling these theoretical results with concrete examples, the authors bridge the gap between termination proofs based on Dickson’s Lemma and explicit, meaningful complexity upper bounds, offering a valuable tool for both theorists and verification engineers.