Derandomized Parallel Repetition via Structured PCPs

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

A PCP is a proof system for NP in which the proof can be checked by a probabilistic verifier. The verifier is only allowed to read a very small portion of the proof, and in return is allowed to err with some bounded probability. The probability that the verifier accepts a false proof is called the soundness error, and it is an important parameter of a PCP system that one seeks to minimize. Constructing PCPs with sub-constant soundness error and, at the same time, a minimal number of queries into the proof (namely two) is especially important due to applications to inapproximability. In this work we construct such PCP verifiers, i.e., PCPs that make only two queries and have sub-constant soundness error. Our construction can be viewed as a combinatorial alternative to the “manifold vs. point” construction, which is the only construction in the literature for this parameter range. The “manifold vs. point” PCP is based on a low degree test, while our construction is based on a direct product test. We also extend our construction to yield a decodable PCP (dPCP) with the same parameters. By plugging this dPCP into the scheme of Dinur and Harsha (FOCS 2009), one gets an alternative construction of the result of Moshkovitz and Raz (FOCS 2008), namely: a construction of two-query PCPs with small soundness error and small alphabet size. Our construction of a PCP is based on extending the derandomized direct product test of Impagliazzo, Kabanets and Wigderson (STOC 2009) to a derandomized parallel repetition theorem. More accurately, our PCP construction is obtained in two steps. We first prove a derandomized parallel repetition theorem for specially structured PCPs. Then, we show that any PCP can be transformed into one that has the required structure, by embedding it on a de Bruijn graph.


💡 Research Summary

The paper tackles a central challenge in probabilistically checkable proofs (PCPs): achieving sub‑constant soundness error while limiting the verifier to only two queries and keeping the alphabet size small. Historically, the only known construction meeting these parameters was the “manifold vs. point” PCP, which relies on low‑degree testing. The authors present a completely different combinatorial approach based on a direct‑product test, and they extend it to a decodable PCP (dPCP) with identical parameters.

The technical core consists of two steps. First, the authors prove a derandomized parallel‑repetition theorem for a special class of PCPs that have a highly regular structure. This theorem is built on the derandomized direct‑product test of Impagliazzo, Kabanets, and Wigderson (STOC 2009). Instead of using fully independent random queries, the test samples queries along paths in a de‑Bruijn graph. Because de‑Bruijn graphs have excellent expansion and a deterministic routing property, a small set of such paths provides enough “diversity” to simulate the effect of many independent repetitions. The analysis shows that if the original PCP has soundness ε, after k‑fold derandomized repetition the soundness drops to ε′ = ε^{Ω(k)} while the verifier still reads only two positions.
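The core idea of a direct-product test is that a prover claims, for each small set of coordinates, the values of one underlying string on that set, and the verifier checks that two overlapping claims agree on their intersection. The sketch below is a toy illustration of that consistency check only; the function names, parameters, and uniform sampling of the two sets are our own simplifications, whereas the actual Impagliazzo–Kabanets–Wigderson test draws the sets from a small derandomized collection and comes with a delicate soundness analysis.

```python
import random

def honest_proof(x):
    """A proof derived from a single underlying string x: for each set S
    of coordinates, it honestly reports x restricted to S."""
    return lambda S: {i: x[i] for i in S}

def direct_product_test(proof, n, k, overlap, trials=100):
    """Toy consistency check in the spirit of a direct-product test.

    `proof` maps a frozenset S of k coordinates (out of n) to a dict
    {i: claimed value of the underlying string at i}.  Each trial
    samples two k-sets sharing `overlap` coordinates and accepts only
    if the two claims agree on the shared coordinates.
    """
    for _ in range(trials):
        common = random.sample(range(n), overlap)
        rest = [i for i in range(n) if i not in common]
        a = frozenset(common + random.sample(rest, k - overlap))
        b = frozenset(common + random.sample(rest, k - overlap))
        va, vb = proof(a), proof(b)
        if any(va[i] != vb[i] for i in common):
            return False  # caught an inconsistency between the two views
    return True
```

An honest proof (one genuinely derived from a single string) passes every trial; the substance of the soundness analysis is the converse direction, namely that any proof passing the test with noticeable probability must be close to an honest one.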

The second step shows that any PCP can be transformed into one that fits the required regular structure. The transformation embeds the original proof onto the vertices of a de‑Bruijn graph, assigning each vertex a short local view of the proof. The verifier’s two queries correspond to two adjacent vertices; checking consistency between their local views enforces the global constraints of the original PCP. This embedding preserves the alphabet size up to a polynomial factor and increases the proof length only by a logarithmic factor, which is acceptable for the intended applications.
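The deterministic routing property of de Bruijn graphs mentioned above can be made concrete on the standard binary de Bruijn graph, where vertices are n-bit strings and each edge shifts the string left by one bit: from any vertex u one reaches any vertex v in exactly n steps by shifting in the bits of v. This is only a minimal sketch of the graph and its routing (the paper's embedding carries local views of the proof along such paths, which is not modeled here):

```python
def debruijn_neighbors(v, n):
    """Out-neighbours of vertex v in the binary de Bruijn graph on
    2**n vertices: drop the top bit, shift left, append 0 or 1."""
    mask = (1 << n) - 1
    return [((v << 1) & mask) | b for b in (0, 1)]

def route(u, v, n):
    """Deterministic n-step path from u to v: shift the bits of v
    into the register one at a time, high bit first."""
    mask = (1 << n) - 1
    path = [u]
    for i in range(n - 1, -1, -1):
        u = ((u << 1) & mask) | ((v >> i) & 1)
        path.append(u)
    return path
```

Because the route between any two vertices is fixed and has length exactly n, constraints of the original PCP can be scheduled along these paths without any additional randomness, which is what makes the structure suitable for derandomized repetition.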

Combining the two steps yields a two‑query PCP with sub‑constant soundness error and a small alphabet. Moreover, because the underlying test is a direct‑product test, the construction naturally extends to a decodable PCP. In a dPCP the verifier can not only check correctness but also recover the original proof symbols from a few queried locations. By plugging this dPCP into the Dinur‑Harsha composition framework (FOCS 2009), the authors obtain an alternative proof of the Moshkovitz‑Raz result (FOCS 2008) without resorting to low‑degree testing.

The paper provides a detailed technical development: (i) a precise definition of the structured PCP model and its embedding into de‑Bruijn graphs; (ii) the derandomized direct‑product test, including the sampling scheme, completeness, and soundness analysis; (iii) the parallel‑repetition theorem, proving exponential decay of error with only O(1) queries; (iv) the transformation from arbitrary PCPs to the structured form, with bounds on alphabet blow‑up and proof length; (v) the construction of the decodable PCP and its integration into the Dinur‑Harsha composition. Throughout, the authors compare their parameters with those of the manifold‑vs‑point construction, highlighting that their method achieves comparable (or better) soundness‑error versus alphabet‑size trade‑offs while using purely combinatorial tools.

In conclusion, this work introduces a new paradigm for reducing PCP soundness error: derandomized parallel repetition via structured direct‑product testing on de‑Bruijn graphs. It eliminates the need for algebraic low‑degree tests, reduces randomness consumption, and maintains a compact alphabet, thereby broadening the toolkit for hardness‑of‑approximation reductions. The techniques are likely to be applicable beyond the specific setting studied, for instance in constructing locally testable codes, robust PCPs, and other derandomization contexts. Future directions include optimizing the graph parameters, extending the approach to more than two queries, and exploring connections with high‑dimensional expanders.

