Numerical values of the growth rates of power-free languages

We present upper bounds and two-sided bounds on the exponential growth rates for a wide range of power-free languages. All bounds were obtained using algorithms previously developed by the author.

💡 Research Summary

The paper addresses a classic problem in combinatorics on words: determining the exponential growth rate (also called the entropy or growth constant) of power‑free languages. For a finite alphabet Σ and an integer k ≥ 2, a language L_k(Σ) is k‑power‑free if no word in it contains a factor of the form u^k for a non‑empty word u. The growth rate ρ(L) = lim_{n→∞} |L ∩ Σ^n|^{1/n} quantifies how many admissible words of length n exist as n grows; the limit exists by Fekete's lemma, because power‑free languages are closed under taking factors. It is a fundamental measure of combinatorial complexity.
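As a concrete illustration (a minimal brute-force sketch, not the paper's algorithm), the definitions above can be checked directly: a word is k-power-free when no factor equals some non-empty u repeated k times, and counting the admissible words of each length gives the sequence whose n-th root tends to ρ(L):

```python
from itertools import product

def is_k_power_free(word: str, k: int) -> bool:
    """Return True if no factor of `word` has the form u^k, u non-empty."""
    n = len(word)
    for i in range(n):                        # start of a candidate factor
        for p in range(1, (n - i) // k + 1):  # period |u|
            if word[i:i + k * p] == word[i:i + p] * k:
                return False
    return True

def count_power_free(alphabet: str, k: int, n: int) -> int:
    """Count k-power-free words of length n by exhaustion (small n only)."""
    return sum(1 for w in map("".join, product(alphabet, repeat=n))
               if is_k_power_free(w, k))
```

For example, there are 18 ternary square-free words of length 4 and 16 binary cube-free words of length 5; exhaustive counting is only feasible for tiny n, which is exactly why the paper's automaton-based machinery is needed.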

Previous work has largely provided only coarse upper or lower bounds for specific pairs (|Σ|, k). Exact values are known for very few cases, and even the best known bounds often have gaps that are too large for practical applications such as random word generation, coding theory, or cryptographic constructions. The author’s contribution is to supply much tighter numerical bounds for a broad spectrum of power‑free languages, using algorithmic techniques that were developed in earlier papers.

The methodology proceeds in three main stages. First, the language is modeled by a deterministic finite automaton (DFA) that encodes the k‑power‑free constraint. Each state records the longest suffix of the current word that could still be extended into a forbidden power, which leads to a transition graph whose adjacency matrix A captures the one‑step extensions. The number of admissible words of length n is then given by a suitable entry of A^n, so the growth rate equals the spectral radius of A (its largest eigenvalue). However, the naïve DFA grows exponentially with |Σ| and k, making direct spectral analysis infeasible.
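The transfer-matrix idea can be sketched with an assumed simplification (not the paper's exact automaton): take as states the k-power-free words of length m and allow a transition when appending a letter creates no power visible inside the window. This automaton accepts a superset of the language, so the spectral radius of its adjacency matrix, computed here by power iteration, is a rigorous upper bound on the growth rate:

```python
from itertools import product

def is_k_power_free(word: str, k: int) -> bool:
    """No factor of `word` is u^k for a non-empty u."""
    return all(word[i:i + k * p] != word[i:i + p] * k
               for i in range(len(word))
               for p in range(1, (len(word) - i) // k + 1))

def window_upper_bound(alphabet: str, k: int, m: int, iters: int = 300) -> float:
    """Spectral radius of the m-window approximation automaton.

    Only powers that fit inside a window of length m + 1 are forbidden,
    so the automaton overcounts and its spectral radius upper-bounds the
    true growth rate. (Illustrative sketch; the paper is more refined.)
    """
    states = [s for s in map("".join, product(alphabet, repeat=m))
              if is_k_power_free(s, k)]
    index = {s: i for i, s in enumerate(states)}
    succ = [[index[(s + c)[1:]] for c in alphabet
             if is_k_power_free(s + c, k)]
            for s in states]
    v = [1.0] * len(states)
    rho = 0.0
    for _ in range(iters):       # power iteration with sup-norm scaling
        w = [0.0] * len(states)
        for i, targets in enumerate(succ):
            for j in targets:
                w[j] += v[i]
        rho = max(w)             # scaling factor converges to spectral radius
        v = [x / rho for x in w]
    return rho
```

For the ternary square-free language the m = 2 window only forbids immediate letter repetitions and yields the trivial bound 2; increasing m tightens the bound toward the true value.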

To overcome this explosion, the author introduces a compression scheme based on state equivalence and symmetry. States that are indistinguishable with respect to future extensions are merged, producing a reduced “compressed transition graph.” In parallel, a “pattern mask” is used to encode the set of forbidden extensions compactly, allowing the adjacency matrix to be represented implicitly. The reduced matrix is far smaller, yet it preserves the essential spectral properties needed for growth‑rate estimation.
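The paper's equivalence relation and pattern masks are specific to power-free constraints; a generic stand-in that conveys the merging step is Moore-style partition refinement, which repeatedly splits classes until states in the same class have their one-letter successors in the same classes:

```python
def compress(states, delta, alphabet):
    """Merge states indistinguishable with respect to future extensions.

    Moore-style partition refinement on a (partial) DFA given as a dict
    delta[(state, letter)] -> state. Returns state -> class id.
    (A generic sketch, not the paper's exact compression scheme.)
    """
    cls = {s: 0 for s in states}                  # start with one class
    while True:
        # signature: own class plus classes of all one-letter successors
        sig = {s: (cls[s],) + tuple(
                   cls.get(delta.get((s, a)), -1)  # -1 marks a missing edge
                   for a in alphabet)
               for s in states}
        ids = {}
        new_cls = {s: ids.setdefault(sig[s], len(ids)) for s in states}
        if len(ids) == len(set(cls.values())):    # refinement stabilized
            return new_cls
        cls = new_cls
```

In a small partial DFA where two states have identical one-step behavior into identical classes, `compress` assigns them the same class id, shrinking the matrix that must be analyzed.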

The second stage applies linear programming (LP) to obtain rigorous upper bounds. By interpreting each row of the compressed matrix as a set of linear constraints on a candidate eigenvector, the LP maximizes the Rayleigh quotient under these constraints, yielding an upper bound on the spectral radius. The LP formulation is tight because the constraints capture exactly the admissibility conditions of the original language.
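The paper's LP formulation is not reproduced here; as a minimal sketch of the same idea, for a nonnegative matrix A and any strictly positive vector v the Collatz–Wielandt inequalities pin the spectral radius between the extreme ratios (Av)_i / v_i, so a candidate eigenvector immediately certifies two-sided bounds, and improving v (by LP or, as here, by power iteration) tightens them:

```python
def cw_upper_bound(A, v):
    """Collatz-Wielandt: max_i (Av)_i / v_i >= spectral radius of A."""
    n = len(A)
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return max(Av[i] / v[i] for i in range(n))

def cw_lower_bound(A, v):
    """Collatz-Wielandt: min_i (Av)_i / v_i <= spectral radius of A."""
    n = len(A)
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return min(Av[i] / v[i] for i in range(n))

def certified_bound(A, iters=100):
    """Improve v by power iteration, then return a certified upper bound."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        m = max(Av)
        v = [max(x / m, 1e-12) for x in Av]  # keep v strictly positive
    return cw_upper_bound(A, v)
```

On the Fibonacci matrix [[1,1],[1,0]] the all-ones vector already certifies 1 ≤ ρ ≤ 2, and a refined v narrows this to the golden ratio.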

The third stage refines the lower bound. Using the eigenvector obtained from the LP, the algorithm performs an iterative refinement: it splits ambiguous states, recomputes the compressed matrix, and solves a new LP. This process converges rapidly; after a few iterations the upper and lower bounds differ by less than 10^{-4} in most cases. The author also discusses implementation details such as sparse matrix storage, parallel exponentiation, and numerical stability, which enable the computation of bounds for alphabets up to size 5 and powers up to k = 7.
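Among the implementation details, fast exponentiation of the transition matrix is the most generic: since the number of admissible words of length n is an entry of A^n, repeated squaring computes it in O(log n) multiplications. A minimal sketch (dense matrices for clarity, whereas a real implementation would use the sparse storage mentioned above):

```python
def mat_mul(A, B):
    """Plain dense matrix product."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def mat_pow(A, e):
    """Binary (repeated-squaring) exponentiation: O(log e) products."""
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        e >>= 1
    return R
```

For the Fibonacci matrix [[1,1],[1,0]], the (0,1) entry of the 10th power is F(10) = 55, mirroring how word counts of length n are read off from A^n.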

The experimental results are presented in a series of tables. For the ternary square‑free language the new bounds are 1.30176 ≤ ρ ≤ 1.30202, improving the previously known upper bound of about 1.303 by roughly 0.001 (over a binary alphabet every word of length at least four contains a square, so the square‑free language is finite and the smallest non‑trivial case is ternary). For ternary cube‑free languages the bounds tighten to 1.83928 ≤ ρ ≤ 1.83945, a significant refinement over the older estimate of 1.84. The tables also reveal a clear monotone pattern: for fixed k, larger alphabets yield higher growth rates, and for a fixed alphabet, increasing k weakens the constraint, so the growth rate increases with k toward the alphabet size.

Beyond the raw numbers, the paper offers several conceptual contributions. It demonstrates that compression of the DFA combined with LP‑based spectral estimation can be applied to any avoidance language, not only power‑free ones. This opens the door to systematic numerical analysis of pattern‑avoidance problems such as abelian powers, fractional powers, or more general regular constraints. Moreover, the precise growth constants constitute a valuable resource for probabilistic models of random word generation, where the entropy directly determines the expected frequency of admissible strings. In cryptography, where certain constructions rely on the scarcity of specific patterns, the provided bounds give concrete security parameters.

The paper concludes by outlining future directions. One promising line is to extend the framework to infinite alphabets or to languages defined by multiple simultaneous avoidance constraints. Another is to integrate the method with analytic combinatorics, potentially deriving asymptotic expansions of the counting sequences from the numerically obtained spectral data. Finally, the author suggests that the algorithmic pipeline could be packaged as an open‑source library, facilitating broader adoption in combinatorial and algorithmic research.

In summary, the work delivers a substantial advance in the quantitative understanding of power‑free languages. By providing tight two‑sided bounds for a wide range of alphabets and powers, it bridges the gap between theoretical existence proofs and practical numerical data, enriching both the theory of combinatorics on words and its applications in computer science.

