Finite state verifiers with constant randomness
We give a new characterization of $\mathsf{NL}$ as the class of languages whose members have certificates that can be verified with small error in polynomial time by finite state machines that use a constant number of random bits, as opposed to its conventional description in terms of deterministic logarithmic-space verifiers. It turns out that allowing two-way interaction with the prover does not change the class of verifiable languages, and that no polynomially bounded amount of randomness is useful for constant-memory computers when used as language recognizers, or public-coin verifiers. A corollary of our main result is that the class of outcome problems corresponding to $O(\log n)$-space bounded games of incomplete information where the universal player is allowed a constant number of moves equals $\mathsf{NL}$.
💡 Research Summary
The paper revisits the nondeterministic logarithmic‑space class NL from a fresh computational perspective. Traditionally NL is described either as the set of languages accepted by a nondeterministic Turing machine using O(log n) space, or as the set of languages for which a deterministic log‑space verifier can check a polynomial‑size certificate in polynomial time. The authors replace the log‑space verifier with an extremely weak model: a finite‑state machine (hence constant memory) that is allowed to toss only a constant number of random bits. They call this model a “constant‑randomness finite‑state verifier” (CRFA‑V). A language is CRFA‑V‑verifiable if there is such a verifier that, running in polynomial time with at most a constant number of random bits, accepts some certificate for every member of the language with probability at least 1 − ε, and rejects every purported certificate for a non‑member with probability at least ½ + ε, for some ε < ½.
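The ε‑error condition can be made concrete in a toy model. The sketch below (plain Python; all names are hypothetical illustrations, not the paper's formalism) treats a verifier as a deterministic function of the input, the certificate, and the k random bits. With k constant, the acceptance probability is simply the fraction of the 2^k random bit strings that lead to acceptance:

```python
from itertools import product

K = 2  # the constant number of random bits (illustrative choice)

def acceptance_probability(verifier, x, cert, k=K):
    """Fraction of the 2^k random bit strings on which the deterministic
    map (input, certificate, bits) -> bool accepts."""
    outcomes = [verifier(x, cert, bits) for bits in product((0, 1), repeat=k)]
    return sum(outcomes) / len(outcomes)

def is_verified(verifier, x, in_language, certs, eps=0.25, k=K):
    """Brute-force check of the eps-error condition over a candidate
    certificate set: a member needs some certificate accepted with
    probability >= 1 - eps, while for a non-member every certificate
    must be rejected with probability >= 1/2 + eps."""
    probs = [acceptance_probability(verifier, x, c, k) for c in certs]
    if in_language:
        return max(probs) >= 1 - eps
    return all(p <= 0.5 - eps for p in probs)
```

Note that the brute-force enumeration over random strings is only feasible because k is a constant independent of the input length.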
The central theorem is that NL coincides exactly with the class of CRFA‑V‑verifiable languages. To prove NL ⊆ CRFA‑V, the authors encode a nondeterministic log‑space computation as a certificate that records the sequence of configurations together with local consistency checks. The verifier samples a constant‑size set of positions in the certificate, chosen by its few random bits, and checks that each sampled fragment satisfies the local transition rules. By augmenting the certificate with parity checksums and explicit position encodings, a constant‑size random sample suffices to guarantee, with high probability, that the whole certificate is globally consistent. Thus any NL language admits a constant‑randomness, constant‑memory verifier with arbitrarily small error.
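The sampling step can be caricatured as follows, assuming a toy "machine" whose configurations are integers and whose transition rule permits increments of 0 or 1. The real construction relies on parity checksums and position encodings so that a constant-size sample catches global inconsistency with high probability; this sketch shows only the "pick positions from the random bits and check locally" skeleton:

```python
def locally_consistent(cert, i):
    # Local transition rule of a toy machine: each configuration is the
    # previous one plus 0 or 1. (Stand-in for the paper's real checks.)
    return cert[i + 1] - cert[i] in (0, 1)

def sample_positions(bits, cert_len):
    # Map the k random bits to a constant-size set of positions to probe.
    # Hypothetical scheme: read the bits as one index mod (cert_len - 1).
    idx = int("".join(map(str, bits)), 2) % (cert_len - 1)
    return [idx]

def sampled_check(cert, bits):
    # Accept iff every sampled fragment satisfies the local rule.
    return all(locally_consistent(cert, i)
               for i in sample_positions(bits, len(cert)))
```

A single sampled position already rejects a locally inconsistent certificate on some random strings; the checksums and position encodings are what lift this to high-probability detection of any global inconsistency.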
Conversely, any language that a CRFA‑V can verify must be in NL. The authors show how to simulate the verifier by a nondeterministic log‑space machine: the machine guesses the random bits, reads the same polynomial‑size certificate, and performs the same local checks. Because the verifier’s memory is constant, the simulation needs only O(log n) space: a counter for the current position in the certificate, plus a constant number of bits for the verifier’s state and the guessed random bits. This establishes CRFA‑V ⊆ NL.
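The counting side of this argument can be sketched directly: because k is constant, a simulator can deterministically replay the verifier on all 2^k random strings and tally the acceptances; the tally fits in O(1) bits and the certificate position in O(log n) bits. The `verifier(x, cert, bits) -> bool` interface below is a hypothetical stand-in for the finite-state model:

```python
from itertools import product

def simulate(verifier, x, cert, k, threshold=0.5):
    # Replay the verifier on every one of the constantly many random
    # strings and accept iff the accepting fraction clears the threshold.
    accepted = sum(verifier(x, cert, bits) for bits in product((0, 1), repeat=k))
    return accepted / 2 ** k > threshold
```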
The paper then extends the model to allow two‑way interaction between prover and verifier (CRFA‑V‑2). The verifier may ask questions and receive answers, still using only a constant number of random bits. Remarkably, this additional interaction does not enlarge the class of verifiable languages; CRFA‑V‑2 also characterizes NL. The proof relies on the observation that with constant randomness the verifier cannot extract more than a constant amount of information from the prover, so any interactive protocol can be “flattened” into a non‑interactive certificate that the original CRFA‑V can check.
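The flattening observation admits a short sketch: once the random bits are fixed, the verifier’s next question is a deterministic function of the answers received so far, so the interaction with a fixed prover collapses to one transcript per random string, and all 2^k transcripts fit into a single static certificate. `ask` and `prover` below are hypothetical stand-ins, not the paper’s definitions:

```python
from itertools import product

def transcript(ask, prover, x, bits, rounds):
    # With the random bits fixed, each question is determined by the
    # history of answers, so the dialogue unrolls deterministically.
    answers = []
    for _ in range(rounds):
        question = ask(x, bits, tuple(answers))
        answers.append(prover(x, question))
    return tuple(answers)

def flatten(ask, prover, x, k, rounds):
    # Static certificate: the full transcript for every random string.
    return {bits: transcript(ask, prover, x, bits, rounds)
            for bits in product((0, 1), repeat=k)}
```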
A negative result follows: increasing the randomness budget from constant to any polynomially bounded amount while keeping memory constant does not increase power. For every polynomially bounded function r(n), the class of languages verifiable by a constant‑memory machine with r(n) random bits remains NL. The same limitation holds for public‑coin verifiers. This demonstrates a sharp collapse: beyond a constant budget, randomness is useless for constant‑memory recognizers and verifiers.
Finally, the authors connect their characterization to incomplete‑information games. They consider two‑player games where the universal player is restricted to O(log n) space and is allowed only a constant number of moves. The “outcome problem” – deciding which player has a winning strategy – is shown to be NL‑complete. The reduction uses the CRFA‑V certificate to encode the game’s configuration graph; the verifier’s constant‑randomness checks correspond to verifying that the universal player’s limited moves lead to a win. Hence the same computational power that captures NL also captures the decision problems of these restricted games.
The paper is organized as follows. Section 1 motivates the study of randomness and memory together and surveys related work on space‑bounded verifiers and interactive proofs. Section 2 formally defines CRFA‑V, CRFA‑V‑2, and the notion of ε‑error verification. Section 3 proves the main equivalence NL = CRFA‑V, first constructing certificates for NL languages and then simulating any CRFA‑V by a nondeterministic log‑space machine. Section 4 treats the interactive extension and shows its equivalence to the non‑interactive model. Section 5 establishes the randomness‑collapse theorem, and Section 6 applies the framework to O(log n)‑space incomplete‑information games, proving NL‑completeness of the outcome problem. An appendix supplies the detailed construction of parity checksums, position encodings, and the error‑reduction technique based on repeated constant‑size sampling.
In summary, the authors provide a novel structural characterization of NL: it is exactly the set of languages that admit polynomial‑time verification by a finite‑state machine using only a constant number of random bits. This result unifies three strands—randomness, memory limitation, and interaction—showing that, at the extreme of constant memory, even a handful of random bits suffice to capture the full power of nondeterministic logarithmic space, while any additional randomness or interaction yields no extra strength. The work opens new avenues for exploring other complexity classes under similar combined resource constraints.