Lattice Problems, Gauge Functions and Parameterized Algorithms
Given a k-dimensional subspace M ⊆ ℝⁿ and a full-rank integer lattice L ⊆ ℝⁿ, the subspace avoiding problem (SAP) is to find a shortest vector in L ∖ M. Treating k as a parameter, we obtain new parameterized approximation and exact algorithms for SAP based on the AKS sieving technique. More precisely, we give a randomized (1+ε)-approximation algorithm for parameterized SAP that runs in time 2^{O(n)} · (1/ε)^k, where the parameter k is the dimension of the subspace M. Thus, we obtain a 2^{O(n)}-time algorithm for ε = 2^{−O(n/k)}. We also give a 2^{O(n + k log k)} exact algorithm for parameterized SAP for any ℓₚ norm. Several of our algorithms work when the metric is given by an arbitrary gauge function subject to some natural restrictions; in particular, they work for all ℓₚ norms. We also prove an Ω(2ⁿ) lower bound on the query complexity of AKS-sieving-based exact algorithms for SVP that access the gauge function as an oracle.
💡 Research Summary
The paper studies the Subspace Avoiding Problem (SAP), a generalization of the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP). An instance consists of a full‑rank integer lattice L ⊂ ℝⁿ and a k‑dimensional subspace M ⊂ ℝⁿ; the task is to find the shortest lattice vector that does not lie in M. By treating the subspace dimension k as a parameter, the authors develop both approximation and exact algorithms whose running times depend exponentially on k but only singly exponentially on the ambient dimension n.
The core technical tool is a parameterized version of the Ajtai‑Kumar‑Sivakumar (AKS) sieving procedure. The authors first extend the sieving lemma to any gauge function f that satisfies positivity, homogeneity, and the triangle inequality (so ℓₚ norms are special cases). Lemma 1 shows that given N points inside an f‑ball of radius r, a greedy covering yields a subset S of size at most 5ⁿ such that every original point is within distance r/2 of some point in S. The proof relies on a volume‑packing argument that holds for any centrally symmetric convex body.
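The greedy covering step of the lemma can be sketched as follows. This is an illustrative implementation using the Euclidean norm in place of a general gauge function f; the function name and interface are ours, and the 5ⁿ bound on the number of centers comes from the packing argument, not from anything the code enforces.

```python
import numpy as np

def greedy_sieve(points, r):
    """Greedy covering step of the sieving lemma (sketch).

    Given points inside the ball B(0, r), greedily pick a set of
    centers so that every input point lies within r/2 of some center.
    The packing argument bounds the number of centers by 5^n.
    """
    centers = []      # the subset S of the lemma
    assignment = []   # index of the covering center for each point
    for p in points:
        for j, c in enumerate(centers):
            if np.linalg.norm(p - c) <= r / 2:
                assignment.append(j)   # p is covered by center c
                break
        else:
            centers.append(p)          # p becomes a new center
            assignment.append(len(centers) - 1)
    return np.array(centers), assignment
```

In the AKS sieve this step is applied repeatedly; the centers are discarded after each round, which is why each round costs only a bounded number of points.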
Using this generalized sieve, the authors design a randomized (1+ε)‑approximation algorithm for SAP. The algorithm proceeds as follows:
- Scale the lattice so that the unknown optimal vector v satisfies 2 ≤ f(v) ≤ 3.
- Define R = n·max_i‖b_i‖_f, where the b_i are the basis vectors of L.
- Sample N = Θ((n + k log(1/ε))·log R) points uniformly from the f‑ball B_f(0,2) using the Dyer‑Frieze‑Kannan sampler (which works for any nice gauge function).
- For each sample x_i, reduce it modulo the fundamental parallelepiped of L to obtain a point y_i inside the parallelepiped; the lattice difference d_i = x_i − y_i belongs to L.
- Apply the generalized sieve repeatedly (O(log R) rounds) to the set {y_i}. Each round discards at most 5ⁿ points (the chosen centers), while the radius bounding the corresponding lattice differences halves. After the final round all differences lie inside B_f(0,8) and the set still contains at least 2^{c₁·(n + k log(1/ε))} points.
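The reduction step in the loop above (mapping a sample x to a point y in the fundamental parallelepiped so that d = x − y is a lattice vector) can be sketched as follows; the function name is illustrative, not from the paper.

```python
import numpy as np

def reduce_mod_parallelepiped(B, x):
    """Reduce x modulo the fundamental parallelepiped of the lattice
    whose basis vectors are the columns of B.

    Returns y = x - B*floor(B^{-1} x), which lies inside the
    parallelepiped, together with the lattice vector d = x - y.
    """
    coeffs = np.linalg.solve(B, x)   # coordinates of x in the basis B
    d = B @ np.floor(coeffs)         # lattice part of x
    y = x - d                        # remainder in the parallelepiped
    return y, d
```

Crucially, y depends only on x modulo L, so sieving the y_i while tracking the d_i keeps the lattice differences under control.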
The crucial observation is that each lattice difference lies in a coset of the sublattice L∩M, which is k‑dimensional. By partitioning the set of differences into these cosets and using a packing argument in the k‑dimensional subspace, the authors bound the number of distinct cosets by 2^{O(k log(1/ε))}. Consequently, among the remaining differences there must be a pair whose difference equals v plus a vector u with f(u) ≤ ε. This yields a (1+ε)‑approximate solution with success probability 1 − 2^{−Ω(n)}.
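The coset bookkeeping behind this argument can be sketched as follows: two lattice vectors d₁, d₂ lie in the same coset of L∩M exactly when d₁ − d₂ ∈ M, so the orthogonal projection of a difference onto M's complement identifies its coset. The helper below is our own illustrative construction (including the rounding tolerance used to make the label hashable), not code from the paper.

```python
import numpy as np

def coset_key(d, M_basis, tol=1e-9):
    """Hashable label for the coset of L∩M containing the lattice
    vector d (sketch).

    M_basis: matrix whose columns span the subspace M.
    The component of d orthogonal to M is invariant under shifts by
    L∩M, so rounding it yields a key that groups vectors by coset.
    """
    Q, _ = np.linalg.qr(M_basis)   # orthonormal basis of M
    proj_M = Q @ (Q.T @ d)         # component of d inside M
    residue = d - proj_M           # component orthogonal to M
    return tuple(np.round(residue / tol).astype(int))
```

Grouping the surviving differences by such keys and counting the groups is exactly where the 2^{O(k log(1/ε))} bound on the number of cosets enters the pigeonhole argument.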
For exact solutions, the same sampling and sieving framework is used, but after the sieving phase the algorithm exhaustively searches over the cosets of the k‑dimensional sublattice L∩M. Since the number of cosets is 2^{O(k log k)}, the total running time becomes 2^{O(n + k log k)}. This algorithm works for any ℓₚ norm (p ≥ 1) and, more generally, for any “nice” gauge function that satisfies a mild containment condition between Euclidean balls and the f‑ball.
In addition to algorithmic results, the paper establishes a lower bound on the query complexity of AKS‑style exact SVP algorithms that treat the gauge function as an oracle: any such algorithm must make Ω(2ⁿ) oracle queries. This shows that the exponential dependence on n in AKS‑based exact algorithms is inherent.
The authors also introduce a parameterized complexity viewpoint for SVP, CVP, and SAP. By fixing k as the parameter and allowing the rest of the input to be arbitrarily large, they prove that parameterized SAP and CVP are W