On the (Classical and Quantum) Fine-Grained Complexity of Approximate CVP and Max-Cut

Notice: This research summary and analysis were automatically generated using AI technology. For accuracy, please refer to the original arXiv source.

We show a linear-size reduction from gap Max-2-Lin(2) (a generalization of the gap $\mathrm{Max}$-$\mathrm{Cut}$ problem) to $\gamma\text{-}\mathrm{CVP}_p$ for $\gamma = \mathrm{O}(1)$ and finite $p \geq 1$, as well as a no-go theorem against polynomial-size non-adaptive quantum reductions from $k$-SAT to $\mathrm{CVP}_2$. This implies three headline results: (i) Faster algorithms for $\gamma\text{-}\mathrm{CVP}$ are also faster algorithms for Max-2-Lin(2) and Max-Cut. Depending on the approximation regime, even a $2^{0.78n}$-time or $2^{0.3n}$-time algorithm would improve upon state-of-the-art algorithms such as Williams' 2004 algorithm [Theoretical Computer Science 2005] or Arora et al.'s 2010 algorithm [Journal of the ACM 2015]. This provides evidence that $\gamma\text{-}\mathrm{CVP}$ for $\gamma = \mathrm{O}(1)$ requires exponential time, improving upon the previous lower bound for $\gamma < 3$ by Bennett et al. [arXiv:1704.03928]. (ii) A new $2^{(1/2+\varepsilon/4\varsigma+o(1))n}$-time classical algorithm and a new $2^{(1/3+\varepsilon/6\varsigma+o(1))n}$-time quantum algorithm for $(1-\varepsilon, 1-\varsigma)$-gap Max-2-Lin(2). These algorithms are faster than those of Arora et al., of Williams, and of Manurangsi and Trevisan [arXiv:1807.09898] when $c_0\varepsilon < \varsigma < c_1\varepsilon$ for some constants $c_0, c_1$. (iii) If the Quantum Strong Exponential Time Hypothesis (QSETH) can be used to show a $2^{\delta n}$-time lower bound for Max-Cut, Max-2-Lin(2), or $\mathrm{CVP}_2$ for any constant $\delta > 0$, it must be via an adaptive quantum reduction, unless $\mathrm{NP} \subseteq \mathrm{pr}\text{-}\mathrm{QSZK}$. This illuminates some of the difficulty in characterizing the hardness of approximate CSPs and shows that the post-quantum security of lattice-based cryptography likely cannot be supported by QSETH.


💡 Research Summary

The paper investigates the fine‑grained complexity of the approximate Closest Vector Problem (γ‑CVPₚ) for constant approximation factors γ = O(1) and finite ℓₚ norms (p ≥ 1), and its relationship to the well‑studied constraint satisfaction problem Max‑2‑Lin(2), which includes the approximate Max‑Cut problem as a special case. The authors present a linear‑size reduction from the gap version of Max‑2‑Lin(2), specifically the (1‑ε, 1‑ς)‑gap version, to γ‑CVPₚ. The reduction runs in linear time (both classically and quantumly) and maps an instance with n variables to a lattice instance with exactly n basis vectors, using at most one auxiliary vector per variable. Crucially, the approximation factor γ of the resulting CVP instance is p^{ς/ε}, which can be made arbitrarily large as the gap ς/ε grows, thereby removing the previous restriction that reductions could only produce CVP instances with γ < 3. This "small" reduction establishes a tight relationship: any algorithm for γ‑CVPₚ with running time f(n) immediately yields an algorithm for the gap Max‑2‑Lin(2) problem with essentially the same running time.
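To make the problem definitions concrete, the following is a small illustrative sketch of the gap Max‑2‑Lin(2) problem (not the paper's reduction or algorithms); the instance encoding and helper names are our own. Each constraint is an equation x_i ⊕ x_j = b over GF(2), and Max‑Cut is the special case where every right‑hand side b is 1:

```python
from itertools import product

def satisfied_fraction(equations, assignment):
    """Fraction of equations (i, j, b) with assignment[i] XOR assignment[j] == b."""
    ok = sum(1 for (i, j, b) in equations
             if assignment[i] ^ assignment[j] == b)
    return ok / len(equations)

def best_value(n, equations):
    """Exact optimum by 2^n exhaustive search -- the trivial baseline."""
    return max(satisfied_fraction(equations, a)
               for a in product((0, 1), repeat=n))

def gap_decide(n, equations, eps, sigma):
    """(1-eps, 1-sigma)-gap version: answer YES if the optimum is at least
    1-eps, NO if it is at most 1-sigma; in between, either answer is allowed."""
    v = best_value(n, equations)
    if v >= 1 - eps:
        return "YES"
    if v <= 1 - sigma:
        return "NO"
    return "EITHER"

# A 4-cycle viewed as Max-Cut (all b = 1): bipartite, so all edges can be cut.
cycle = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)]
print(best_value(4, cycle))              # -> 1.0
print(gap_decide(4, cycle, 0.1, 0.5))    # -> YES
```

An odd cycle, e.g. a triangle with all b = 1, has optimum 2/3, which is what makes the gap version nontrivial.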

Leveraging this reduction, the authors derive three major consequences. First, they obtain new conditional lower bounds for γ‑CVPₚ with γ = O(1). If one believes that solving the gap Max‑2‑Lin(2) problem requires 2^{δn} time for some constant δ > 0, then the same exponential lower bound holds for γ‑CVPₚ. This improves upon earlier lower bounds that only covered γ < 3 (Bennett et al., 2017). Second, they design faster algorithms for the gap Max‑2‑Lin(2) problem. A classical algorithm runs in O(2^{(1/2+ε/(4ς)+o(1))n}) time, beating the best known exact algorithm of Williams (2004) and the approximation algorithm of Arora, Barak, and Steurer (2015). A quantum algorithm runs in O(2^{(1/3+ε/(6ς)+o(1))n}) time, surpassing the 2^{n/2} Grover‑search bound for this problem. Both algorithms are particularly advantageous when the gap parameters satisfy c₀ε < ς < c₁ε for some constants c₀, c₁, a regime where previous algorithms were suboptimal.
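A quick back‑of‑the‑envelope comparison (our own illustration, using only the leading exponents stated above) shows how the classical and quantum exponents behave as the gap ratio ς/ε grows: both approach their limits 1/2 and 1/3, well below the trivial 2^n exhaustive search and the 2^{n/2} Grover baseline:

```python
def classical_exponent(eps, sigma):
    """Leading exponent of the stated classical running time: 1/2 + eps/(4*sigma)."""
    return 1 / 2 + eps / (4 * sigma)

def quantum_exponent(eps, sigma):
    """Leading exponent of the stated quantum running time: 1/3 + eps/(6*sigma)."""
    return 1 / 3 + eps / (6 * sigma)

# As sigma/eps grows, the exponents tend to 1/2 and 1/3 respectively.
for ratio in (1, 2, 4, 10):
    eps, sigma = 1.0, float(ratio)
    print(f"sigma/eps = {ratio:2d}: "
          f"classical {classical_exponent(eps, sigma):.3f}, "
          f"quantum {quantum_exponent(eps, sigma):.3f}")
```

For example, at ς = ε the classical exponent is 0.75 and the quantum exponent is 0.5, i.e. the quantum algorithm already matches Grover; any larger ratio beats it.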

Third, the paper establishes a "no‑go" theorem for quantum reductions from k‑SAT to CVP₂. It shows that any polynomial‑size, non‑adaptive quantum reduction would imply NP ⊆ pr‑QSZK, a containment considered highly unlikely. Consequently, any QSETH‑based exponential‑time lower bound for Max‑Cut, Max‑2‑Lin(2), or CVP₂ must rely on adaptive quantum reductions. This mirrors recent classical no‑go results (Aggarwal & Kumar, 2023) and suggests that the post‑quantum security of lattice‑based cryptography cannot be grounded on QSETH alone.

The paper also discusses why reductions that preserve a linear number of basis vectors are essential for fine‑grained hardness, reviews prior work on reductions to γ‑CVPₚ (including those limited to γ<3 or to non‑even p), and explains the technical obstacles that prevented earlier constructions from achieving arbitrary γ with linear size. It concludes with open problems such as extending linear‑size reductions to larger γ or even p, clarifying the exact role of adaptivity in quantum reductions, and improving quantum algorithms for Max‑2‑Lin(2). Overall, the work tightly connects the exponential‑time complexity of approximate lattice problems with that of classic CSPs, provides new algorithmic upper bounds, and delineates fundamental barriers for both classical and quantum fine‑grained reductions.

