Entropy sensitivity of languages defined by infinite automata, via Markov chains with forbidden transitions


A language L over a finite alphabet is growth-sensitive (or entropy-sensitive) if forbidding any set of subwords F yields a sub-language L^F whose exponential growth rate (entropy) is smaller than that of L. Let (X, E, l) be an infinite, oriented, labelled graph. Considering the graph as an (infinite) automaton, we associate with any pair of vertices x,y in X the language consisting of all words that can be read as the labels along some path from x to y. Under suitable, general assumptions we prove that these languages are growth-sensitive. The proof relies on Markov chains with forbidden transitions.


💡 Research Summary

The paper investigates the phenomenon of growth‑sensitivity (also called entropy‑sensitivity) for languages generated by infinite automata. A language L over a finite alphabet Σ is called growth‑sensitive if, after forbidding any non‑empty finite set F of subwords, the resulting sub‑language L^F has a strictly smaller exponential growth rate (entropy) than L. The authors model an infinite automaton as a labelled, oriented graph (X,E,ℓ) where each edge carries a label from Σ. For any pair of vertices x,y∈X, the associated language L_{x,y} consists of all words that can be read along a directed path from x to y.

The main result states that under very general conditions—local finiteness (each vertex has finitely many outgoing edges), strong connectivity (every vertex can be reached from any other), and aperiodicity (the underlying directed graph has period 1)—the languages L_{x,y} are always growth‑sensitive. In other words, for every non‑empty finite set F of forbidden words, the entropy satisfies
 h(L_{x,y}^F) < h(L_{x,y}) for all x,y.
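The definition can be made concrete on a small finite graph. The following sketch uses a hypothetical two‑vertex labelled graph (an illustrative assumption, not the paper's construction) and enumerates the words of L_{0,0} of a fixed length, with and without a forbidden factor; the restricted count collapses, which is the growth‑sensitivity phenomenon in miniature.

```python
# Toy illustration of growth-sensitivity on a deterministic labelled graph.
# The graph below (a "golden-mean"-style automaton) is an assumption for
# illustration only.

# Deterministic labelled graph: edges[vertex] = {label: target vertex}
edges = {
    0: {'a': 0, 'b': 1},   # vertex 0: self-loop 'a', edge 'b' to vertex 1
    1: {'a': 0},           # vertex 1: returns to vertex 0 via 'a'
}

def count_words(x, y, n, forbidden=None):
    """Number of length-n words readable along paths from x to y,
    optionally excluding words that contain the factor `forbidden`."""
    total = 0
    stack = [(x, "")]
    while stack:
        v, w = stack.pop()
        if forbidden and forbidden in w:
            continue                      # prune forbidden factors early
        if len(w) == n:
            total += (v == y)
            continue
        for letter, u in edges[v].items():
            stack.append((u, w + letter))
    return total

n = 16
full = count_words(0, 0, n)                      # exponentially many words
restricted = count_words(0, 0, n, forbidden="ab")  # only 2 words survive
print(full, restricted)
```

Because the labels are deterministic, distinct words correspond to distinct paths; forbidding the single factor "ab" here cuts the language from Fibonacci-type growth down to a bounded number of words per length.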

The proof hinges on a probabilistic reinterpretation of the graph as a Markov chain. By assigning uniform transition probabilities to outgoing edges, one obtains a stochastic matrix P whose spectral radius ρ(P) is, by Perron‑Frobenius theory, strictly positive. The authors show that the entropy of L_{x,y} equals log ρ(P). Introducing forbidden words corresponds to deleting all transitions that would generate a word from F; the resulting transition matrix is denoted P^F. The crucial technical step is to prove that ρ(P^F) < ρ(P). This is achieved by combining three ingredients:
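The matrix comparison can be sketched numerically. The 2×2 matrix below and the deleted edge are illustrative assumptions (a finite stand-in for the paper's infinite matrices): removing a transition that realizes a forbidden pattern strictly lowers the spectral radius, computed here by power iteration.

```python
# Hedged sketch of the spectral-radius comparison rho(P^F) < rho(P),
# on a toy 2x2 non-negative matrix (not the paper's construction).

def spectral_radius(A, iters=200):
    """Power iteration for the Perron root of a non-negative matrix."""
    n = len(A)
    v = [1.0] * n
    rho = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        rho = max(w)            # sup-norm; converges to the Perron eigenvalue
        if rho == 0.0:
            return 0.0
        v = [x / rho for x in w]
    return rho

A = [[1, 1],
     [1, 0]]    # golden-mean graph: rho(A) = (1 + sqrt(5)) / 2

AF = [[1, 0],
      [1, 0]]   # edge 0 -> 1 deleted (a forbidden transition): rho drops to 1

print(spectral_radius(A), spectral_radius(AF))
```

The strict gap between the two Perron roots is exactly what translates, after taking logarithms, into the strict entropy inequality of the main theorem.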

  1. Subadditive ergodic theorem – applied to the logarithm of the number of admissible paths of length n, guaranteeing convergence to log ρ(P).
  2. Large‑deviation estimates – showing that the proportion of paths containing a forbidden pattern decays exponentially, which forces a strict drop in the spectral radius when those paths are removed.
  3. Perron‑Frobenius theory for non‑negative infinite matrices – providing a quantitative gap between ρ(P) and ρ(P^F) based on the minimal probability of encountering a forbidden transition.
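Ingredient 1 can be watched numerically on a toy two‑vertex graph (an assumption for illustration): the quantity (1/n)·log of the number of length‑n paths converges to the logarithm of the spectral radius, here log((1+√5)/2) ≈ 0.4812.

```python
# Subadditive path counting: (1/n) log(#paths of length n) -> log rho(A).
# The 2x2 adjacency matrix is a toy example, not taken from the paper.
from math import log

A = [[1, 1],
     [1, 0]]   # adjacency matrix of a 2-vertex graph; rho(A) = golden ratio

def paths(n):
    """Total number of length-n paths (any endpoints), by dynamic programming."""
    v = [1, 1]
    for _ in range(n):
        v = [A[0][0] * v[0] + A[0][1] * v[1],
             A[1][0] * v[0] + A[1][1] * v[1]]
    return sum(v)

estimates = [log(paths(n)) / n for n in (5, 10, 20, 40, 80)]
print(estimates)   # the estimates approach log((1 + 5**0.5) / 2) ~ 0.4812
```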

With ρ(P^F) strictly smaller, the entropy of the restricted language follows immediately as h(L_{x,y}^F)=log ρ(P^F). The authors present a detailed four‑step proof: (i) asymptotic counting of unrestricted paths, (ii) construction of the forbidden‑transition matrix, (iii) estimation of the spectral gap, and (iv) translation of the spectral gap into an entropy inequality.

Beyond the abstract theorem, the paper discusses several concrete contexts where the result applies. In group theory, the Cayley graph of an infinite finitely generated group satisfies the required hypotheses; forbidding a word corresponds to excluding a particular group element from the generating set, and the theorem recovers known facts about the strict decrease of growth rate under such exclusions. In symbolic dynamics, subshifts of finite type can be represented by finite‑state graphs; extending to infinite graphs yields a natural generalization of the classical decrease of entropy that occurs when additional forbidden blocks are imposed. The authors also compare their approach with earlier work on random walks with forbidden patterns, emphasizing that the spectral‑radius method gives a more direct and often sharper bound on entropy.
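A classical finite‑state instance of this entropy decrease, standard in symbolic dynamics and included here as an illustration rather than as the paper's own example: forbidding the block "11" in the full 2‑shift lowers the entropy from log 2 to log((1+√5)/2).

```python
# Entropy drop in a subshift of finite type: full 2-shift vs. golden-mean
# shift (binary words with no factor "11"). Classical example, used here
# only to illustrate the entropy-decrease phenomenon.
from math import log

def count_avoiding_11(n):
    """Binary words of length n with no factor '11' (Fibonacci-counted)."""
    a, b = 1, 1               # a: words ending in '0', b: words ending in '1'
    for _ in range(n - 1):
        a, b = a + b, a       # '0' may follow anything; '1' only follows '0'
    return a + b

n = 2000
h_full = log(2)                            # entropy of the full 2-shift
h_golden = log(count_avoiding_11(n)) / n   # ~ log((1 + 5**0.5) / 2)
print(h_full, h_golden)
```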

The paper concludes with suggestions for future research: (a) handling infinite but sparse forbidden sets, (b) relaxing the local‑finiteness assumption to allow countably infinite out‑degrees, and (c) incorporating weighted labels or non‑uniform transition probabilities, which would lead to a richer class of Markov chains and potentially new notions of entropy. Overall, the work provides a robust, probabilistic framework for understanding how even minimal restrictions on an infinite automaton inevitably reduce its combinatorial complexity, thereby extending the concept of growth‑sensitivity far beyond the realm of regular languages.

