$2^{\log^{1-\epsilon} n}$ Hardness for Closest Vector Problem with Preprocessing
We prove that for an arbitrarily small constant $\epsilon > 0$, assuming NP $\not\subseteq$ DTIME$(2^{\log^{O(1/\epsilon)} n})$, the preprocessing versions of the closest vector problem and the nearest codeword problem are hard to approximate within a factor better than $2^{\log^{1-\epsilon} n}$. This improves upon the previous hardness factor of $(\log n)^\delta$ for some $\delta > 0$ due to \cite{AKKV05}.
💡 Research Summary
The paper establishes quasi-polynomial hardness of approximation for the preprocessing versions of the Closest Vector Problem (CVPP) and the Nearest Codeword Problem (NCPP). For any arbitrarily small constant ε > 0, assuming the complexity-theoretic hypothesis that NP ⊈ DTIME(2^{log^{O(1/ε)} n}), the authors prove that neither CVPP nor NCPP can be approximated within a factor better than 2^{log^{1‑ε} n}. This dramatically improves the previous best hardness factor of (log n)^δ (for some constant δ > 0) obtained by Alekhnovich, Khot, Kindler, and Vishnoi (AKKV05).
The reduction starts from a preprocessing version of a quadratic constraint satisfaction problem over a finite field F_q, denoted F_q‑QCSPP. An instance consists of n variables and poly(n) homogeneous degree‑2 equations; in the preprocessing model, arbitrary pre‑computation may be performed on the equations before the right‑hand sides are revealed. The authors first boost the soundness of this instance using Reed‑Muller codes: they replace the original poly(n) equations with q = n·log^{O(1/ε)} n equations over the same variable set. In the resulting instance, a perfectly satisfiable system remains perfectly satisfiable, while if the original system was unsatisfiable, no assignment satisfies more than a 1/q fraction of the new equations.
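The principle behind this kind of soundness boosting can be illustrated with a toy computation (a hedged sketch, not the paper's construction, which uses full Reed‑Muller encodings rather than only linear combinations): if an assignment violates at least one equation of a system over F_q, then a uniformly random F_q‑linear combination of the equations is violated with probability exactly 1 − 1/q. The field size, the three‑equation system, and the assignments below are all illustrative.

```python
import itertools

Q = 5  # toy prime field F_5 (illustrative choice)

# Toy homogeneous degree-2 system over F_5 in variables (x1, x2);
# each equation is represented by its residual at a given assignment.
equations = [
    lambda x: (x[0] * x[1] - 2) % Q,  # x1*x2 = 2
    lambda x: (x[0] * x[0] - 4) % Q,  # x1^2  = 4
    lambda x: (x[1] * x[1] - 1) % Q,  # x2^2  = 1
]

def satisfied_fraction_of_combinations(x):
    """Enumerate all Q^k linear combinations c of the k equations and count
    how many combined equations sum_i c_i * eq_i(x) = 0 hold at x."""
    residuals = [eq(x) for eq in equations]
    k = len(residuals)
    sat = sum(
        1
        for c in itertools.product(range(Q), repeat=k)
        if sum(ci * ri for ci, ri in zip(c, residuals)) % Q == 0
    )
    return sat, Q ** k
```

For a satisfying assignment every residual is zero, so every combination is satisfied; for a violating assignment the residual vector is nonzero, so exactly a 1/Q fraction of combinations (a hyperplane of coefficient vectors) is satisfied.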
Next, they construct a probabilistically checkable proof (PCP) that verifies assignments to the boosted QCSPP instance. The PCP combines two classic tools:
- Low‑Degree Test (Arora‑Sudan points‑vs‑lines): the assignment is encoded as a degree‑m (m = log n) polynomial on F_{q}^m. The test checks that the claimed function agrees with a low‑degree polynomial on random affine lines. Crucially, the test works even when the prover’s success probability is as low as 1/q^{e} for some constant e > 0, enabling list decoding.
- Sum‑Check Protocol (LFKN): after decoding a candidate low‑degree polynomial, the protocol verifies that the polynomial satisfies all quadratic equations by checking a weighted sum over the Boolean hypercube {0,1}^m. This requires only O(log n) queries.
The combined PCP has O(log n) layers, soundness 1/q^{f} (for a constant f > 0), and, because of the low‑degree encoding, it automatically satisfies a strong smoothness property: any two distinct labels for a vertex map to different labels of a neighboring vertex with probability at least 1 − δ, where δ = 1/q^{Θ(1)}.
From this PCP the authors derive an instance of Hypergraph LABEL COVER with Preprocessing (HLCPP). Unlike the standard LABEL COVER, HLCPP features a multilayered hypergraph, hyper‑edges that enforce the conjunction of several edge constraints, and many‑to‑many constraints rather than projections. The smoothness and soundness parameters of the PCP translate directly into δ‑smoothness and soundness s = 1/q^{Θ(1)} for the HLCPP instance, while the size of the instance remains q^{O(m)} = q^{O(log n)}.
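The many‑to‑many flavor of the HLCPP constraints can be made concrete with a small sketch (hypothetical data structures chosen for illustration; real instances are layered and of size q^{O(log n)}): each hyperedge carries an explicit set of allowed label tuples, rather than a projection map, and the value of a labeling is the fraction of hyperedges it satisfies.

```python
def label_cover_value(hyperedges, labeling):
    """Fraction of hyperedges satisfied by `labeling`.

    hyperedges: list of (vertices, allowed) pairs, where `allowed` is a set of
    label tuples -- a many-to-many constraint: unlike a projection, a label for
    one endpoint need not determine the labels of the others.
    labeling: dict mapping vertex -> label.
    """
    sat = sum(
        1
        for vertices, allowed in hyperedges
        if tuple(labeling[v] for v in vertices) in allowed
    )
    return sat / len(hyperedges)
```

A 3‑ary hyperedge here plays the role of a conjunction of edge constraints: all of its vertices must jointly pick an allowed combination for the hyperedge to count as satisfied.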
The final reduction follows the framework of AKKV05: HLCPP is reduced to the Minimum Weight Solution Problem with Preprocessing (MWSPP), which is essentially NCPP in disguise. In MWSPP the algorithm may preprocess the fixed matrices B_f and B_v; only the target vector t arrives online, and the goal is to find a solution of B_f x = t minimizing the Hamming weight of B_v x. The reduction preserves the hardness factor: a factor‑C hardness for MWSPP (or NCPP) yields a factor‑C^{1/p} hardness for CVPP under the ℓ_p norm (1 ≤ p < ∞), as shown by Feige and Micciancio (FM04).
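The online phase of MWSPP can be illustrated with a brute‑force toy solver (a hedged sketch: the matrix names B_f and B_v follow the text, but the tiny sizes and exhaustive search are purely illustrative, and the preprocessing phase, which would precompute on B_f and B_v before t arrives, is omitted).

```python
import itertools

def mwsp_brute(Bf, Bv, t):
    """Among all x in F_2^n with Bf x = t (mod 2), return (weight, x) minimizing
    the Hamming weight of Bv x, or None if the system is infeasible.

    Bf, Bv: lists of rows (lists of 0/1); t: list of 0/1 target entries.
    """
    n = len(Bf[0])
    best = None
    for x in itertools.product([0, 1], repeat=n):
        # check the online constraint Bf x = t over F_2
        if all(sum(r[j] * x[j] for j in range(n)) % 2 == ti
               for r, ti in zip(Bf, t)):
            w = sum(sum(r[j] * x[j] for j in range(n)) % 2 for r in Bv)
            if best is None or w < best[0]:
                best = (w, x)
    return best
```

The exhaustive search is of course exponential; the point of the hardness result is precisely that no polynomial‑time online algorithm can even approximate this minimum well, regardless of the preprocessing performed on B_f and B_v.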
Putting all the pieces together, the hardness factor obtained is q^{1/m} ≈ 2^{log^{1‑ε} n}. Since the reduction incurs only quasi‑polynomial blow‑up, the final statement is that, under the stated complexity assumption, no polynomial‑time algorithm can approximate CVPP or NCPP within a factor better than 2^{log^{1‑ε} n}. This comes close to the best known approximation algorithms for CVPP, which achieve polynomial factors of the form O(p·n/log n) = 2^{O(log n)}, leaving only a sub‑polynomial gap in the exponent between algorithmic upper bounds and hardness lower bounds.
The paper’s contribution is twofold: it introduces a novel combination of low‑degree testing and sum‑check to obtain extremely smooth and sound label‑cover instances, and it leverages this construction to push the hardness of preprocessing CVPP/NCPP far beyond the previously known polylogarithmic barrier. The result relies on a relatively strong assumption (NP ⊈ DTIME(2^{log^{O(1/ε)} n})), leaving open the question of whether similar hardness can be proved under weaker hypotheses such as the Exponential Time Hypothesis. Nonetheless, the techniques provide a powerful template for future hardness‑of‑approximation work on lattice‑based and coding‑theoretic problems with preprocessing.