An Improved BKW Algorithm for LWE with Applications to Cryptography and Lattices
In this paper, we study the Learning With Errors problem and its binary variant, where secrets and errors are binary or taken in a small interval. We introduce a new variant of the Blum, Kalai and Wasserman algorithm, relying on a quantization step that generalizes and fine-tunes modulus switching. In general, this new technique yields a significant gain in the constant in front of the exponent in the overall complexity. We illustrate this by solving, within half a day, an LWE instance with dimension n = 128, modulus $q = n^2$, Gaussian noise $\alpha = 1/(\sqrt{n/\pi} \log^2 n)$ and binary secret, using $2^{28}$ samples, while the previous best result based on BKW claims a time complexity of $2^{74}$ with $2^{60}$ samples for the same parameters. We then introduce variants of BDD, GapSVP and UniqueSVP, where the target point is required to lie in the fundamental parallelepiped, and show how the previous algorithm is able to solve these variants in subexponential time. Moreover, we show how the previous algorithm can be used to solve the BinaryLWE problem with n samples in subexponential time $2^{(\ln 2/2+o(1))n/\log \log n}$. This analysis does not require any heuristic assumption, contrary to other algebraic approaches; instead, it uses a variant of an idea by Lyubashevsky to generate many samples from a small number of samples. This makes it possible to asymptotically and heuristically break the NTRU cryptosystem in subexponential time (without contradicting its security assumption). We are also able to solve subset sum problems in subexponential time for density $o(1)$, which is of independent interest: for such density, the previous best algorithm requires exponential time. As a direct application, we can solve in subexponential time the parameters of a cryptosystem based on this problem proposed at TCC 2010.
💡 Research Summary
The paper presents a substantial improvement to the Blum‑Kalai‑Wasserman (BKW) algorithm for solving Learning With Errors (LWE) and several related lattice problems. The authors introduce a new “quantization” step that generalizes the well‑known modulus‑switching technique. By carefully projecting each block of the secret‑error inner product onto a smaller modulus and redefining the error distribution, the algorithm reduces the loss of bias that traditionally plagues BKW. This quantization allows collisions to be created with far fewer samples, decreasing the required number of samples from roughly q^b (where b is the block size) to q^{b·(1‑ε)} for a small ε, and consequently reduces the overall exponential constant in the runtime.
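As a rough illustration of the quantization idea, classic modulus switching can be sketched as follows. This is a hedged toy version, not the paper's more general quantizer; all parameters (`q`, `p`, `n`) are hypothetical.

```python
# Hedged toy sketch of modulus switching viewed as quantization; the paper's
# quantizer generalizes and fine-tunes this step.  Parameters are illustrative.
import random

def mod_switch(a, c, q, p):
    """Round an LWE sample (a, c) from modulus q down to modulus p < q."""
    a_p = [round(ai * p / q) % p for ai in a]
    c_p = round(c * p / q) % p
    return a_p, c_p

random.seed(0)
q, p, n = 127, 16, 8
s = [random.randint(0, 1) for _ in range(n)]               # binary secret
a = [random.randrange(q) for _ in range(n)]
e = random.choice([-1, 0, 1])                              # small error
c = (sum(ai * si for ai, si in zip(a, s)) + e) % q
a_p, c_p = mod_switch(a, c, q, p)
# The switched sample still nearly fits the secret: the centered residue below
# is the scaled original noise plus at most n/2 of accumulated rounding error.
resid = (sum(ai * si for ai, si in zip(a_p, s)) - c_p) % p
resid = min(resid, p - resid)
print(resid)
```

The point of the trade-off is visible here: working modulo `p` makes collisions far cheaper to find, at the price of the controlled rounding term folded into the noise.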
The algorithm proceeds in three phases. First, the quantization phase maps each block of the LWE samples to a reduced modulus L, adding a controlled rounding error to the noise. Second, block‑wise collisions are generated exactly as in classic BKW, but because of the quantization the probability of a collision is dramatically higher, so far fewer samples are needed to obtain a sample whose block is zero. Third, after k such reductions the algorithm obtains samples of the form (0,…,0, e′), where e′ is the sum of 2^k original errors. The bias of e′ is now roughly (original bias)^{2^k}, multiplied by a factor that depends on L; by choosing L appropriately, the bias remains large enough to be distinguished using a fast Fourier transform (FFT) based distinguisher. The authors provide a rigorous analysis of the bias evolution, the KL‑divergence incurred by sample reuse, and the Hoeffding bounds that guarantee success with overwhelming probability.
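The collision and distinguishing phases can be sketched end to end on a toy instance. The sketch below performs a single BKW reduction step and uses a naive DFT-style score in place of the FFT distinguisher; the parameters and structure are illustrative, not the paper's.

```python
# Toy BKW pipeline: bucket samples on their first block to create collisions,
# subtract colliding pairs to zero that block, then recover one secret
# coordinate with a DFT-style distinguisher.  All parameters are illustrative.
import cmath
import random

random.seed(1)
q, n, block = 11, 4, 3                     # tiny hypothetical parameters
s = [random.randrange(q) for _ in range(n)]

def sample():
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 0, 1])       # narrow noise
    return a, (sum(x * y for x, y in zip(a, s)) + e) % q

def reduce_block(samples, lo, hi):
    """One BKW step: subtract pairs of samples agreeing on coords [lo, hi)."""
    buckets, out = {}, []
    for a, c in samples:
        key = tuple(a[lo:hi])
        if key in buckets:
            a2, c2 = buckets.pop(key)
            out.append(([(x - y) % q for x, y in zip(a, a2)], (c - c2) % q))
        else:
            buckets[key] = (a, c)
    return out

reduced = reduce_block([sample() for _ in range(40000)], 0, block)
# Each reduced sample is (0, 0, 0, a_n, c) with doubled noise; score each
# guess g for s[n-1] by the Fourier coefficient of c - a_n * g mod q.
def score(g):
    return sum(cmath.exp(2j * cmath.pi * ((c - a[-1] * g) % q) / q)
               for a, c in reduced).real

guess = max(range(q), key=score)
print(guess, s[-1])
```

The correct guess stands out because the summands align (the bias survives one doubling of the noise), while wrong guesses average out to roughly the square root of the sample count.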
A concrete parameter set demonstrates the practical impact: for n = 128, modulus q = n², Gaussian noise α = 1/(√(n/π)·log² n), and a binary secret, the improved algorithm solves the instance in less than half a day using only 2^{28} samples. By contrast, the best previously published BKW‑based result required 2^{74} time and 2^{60} samples for the same parameters. The improvement stems from a reduction of the exponential constant from roughly 1.5 to below 0.8.
Beyond standard LWE, the paper shows how the same technique can be applied to several variants of lattice problems where the target vector is constrained to lie inside the fundamental parallelepiped or to have bounded infinity norm. The authors define BDD_{∞}^{B,β}, UniqueSVP_{∞}^{B,β}, and GapSVP_{∞}^{B,β} and prove reductions from each to a suitably quantized LWE instance. Consequently, these problems can be solved in sub‑exponential time 2^{(ln 2/2 + o(1))·n/ log log n}, a dramatic improvement over the exponential time required by lattice‑reduction methods.
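As a rough guide to the notation, one plausible formalization of these variants is sketched below; the precise roles of the superscripts $B$ and $\beta$ and the paper's exact normalizations are assumed here, not quoted.

```latex
% Hedged sketch of the constrained variants; notation assumed, not verbatim.
\begin{itemize}
  \item $\mathrm{BDD}_{\infty}^{B,\beta}$: given a basis of a lattice
        $\Lambda$ and a target $t$ in the fundamental parallelepiped with
        $\operatorname{dist}_{\infty}(t, \Lambda) \le \beta$, find a lattice
        vector within $\ell_{\infty}$-distance $\beta$ of $t$.
  \item $\mathrm{UniqueSVP}_{\infty}^{B,\beta}$: find the shortest nonzero
        vector of $\Lambda$, under the promise that it is essentially unique
        and has $\ell_{\infty}$-norm at most $\beta$.
  \item $\mathrm{GapSVP}_{\infty}^{B,\beta}$: decide whether $\Lambda$
        contains a nonzero vector of $\ell_{\infty}$-norm at most $\beta$.
\end{itemize}
```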
The paper also tackles the Binary LWE problem, where both secret and error are binary. By adapting an idea of Lyubashevsky, the authors generate many “synthetic” samples from a small set of genuine samples, while controlling the statistical distance via KL‑divergence bounds. This yields a sub‑exponential algorithm for Binary LWE with only n samples, matching the same 2^{(ln 2/2 + o(1))·n/ log log n} complexity.
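A minimal sketch of the amplification idea: synthetic samples are formed as sums of small random subsets of the genuine ones, at the cost of summing the corresponding errors. The toy parameters below are hypothetical, and the paper's accompanying KL-divergence accounting for sample reuse is omitted.

```python
# Hedged sketch of Lyubashevsky-style sample amplification for Binary LWE.
import random

random.seed(2)
q, n, k = 97, 6, 3                          # k = subset size (hypothetical)
s = [random.randint(0, 1) for _ in range(n)]            # binary secret
genuine = []
for _ in range(n):                          # only n genuine samples
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])
    genuine.append((a, (sum(x * y for x, y in zip(a, s)) + e) % q))

def synthetic(samples, k):
    """Sum a random k-subset; the error grows to a sum of k small errors."""
    subset = random.sample(samples, k)
    a = [sum(col) % q for col in zip(*(ai for ai, _ in subset))]
    c = sum(ci for _, ci in subset) % q
    return a, c

a_new, c_new = synthetic(genuine, k)
# The synthetic sample still satisfies c_new = <a_new, s> + (sum of k errors).
err = (c_new - sum(x * y for x, y in zip(a_new, s))) % q
err = min(err, q - err)
print(err)                                  # at most k with this toy noise
```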
Two notable applications are discussed. First, the authors argue that the NTRU cryptosystem can be attacked in sub‑exponential time using their method, assuming the heuristic that the rotational symmetry of NTRU samples does not significantly impede BKW‑type algorithms. Although a large hidden constant means current NTRU parameters remain safe in practice, the result provides a theoretical break. Second, they apply the technique to low‑density subset‑sum problems (density o(1)), achieving sub‑exponential time where previous lattice‑based algorithms required exponential time. This also allows breaking a cryptosystem proposed at TCC 2010 that is based on such subset‑sum instances.
Experimental results confirm the theoretical claims: the improved BKW algorithm solves the 128‑dimensional instance in about 12 hours, and simulations show that for dimensions up to roughly 200 the sub‑exponential behavior persists for the BDD, GapSVP, and UniqueSVP variants.
In summary, the paper makes three core contributions: (1) a quantization step that refines modulus switching and dramatically reduces the sample complexity of BKW; (2) a unified reduction framework that maps several constrained lattice problems to quantized LWE, yielding sub‑exponential algorithms; and (3) concrete applications to Binary LWE, NTRU, and low‑density subset‑sum, demonstrating both theoretical breakthroughs and practical relevance. The work opens new avenues for analyzing the security of lattice‑based cryptography and suggests further research into optimal quantization parameters, tighter statistical analyses of sample reuse, and practical implementations that balance memory and time constraints.