Beyond Worst-Case Analysis in Private Singular Vector Computation
We consider differentially private approximate singular vector computation. Known worst-case lower bounds show that the error of any differentially private algorithm must scale polynomially with the dimension of the singular vector. We are able to replace this dependence on the dimension by a natural parameter known as the coherence of the matrix that is often observed to be significantly smaller than the dimension both theoretically and empirically. We also prove a matching lower bound showing that our guarantee is nearly optimal for every setting of the coherence parameter. Notably, we achieve our bounds by giving a robust analysis of the well-known power iteration algorithm, which may be of independent interest. Our algorithm also leads to improvements in worst-case settings and to better low-rank approximations in the spectral norm.
💡 Research Summary
The paper tackles the problem of computing an approximate leading singular vector (or the top‑k singular vectors) of a matrix under differential privacy constraints. Classical worst‑case lower bounds dictate that any differentially private algorithm must incur an error that grows polynomially with the ambient dimension d, rendering such methods impractical for high‑dimensional data. The authors replace this dimension‑dependent term with a structural parameter called the coherence μ of the matrix, which measures how aligned the singular vectors are with the standard basis. In many real‑world datasets, μ is dramatically smaller than d—often constant—so an error bound that scales with μ can be substantially tighter.
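To make the coherence parameter concrete, here is a minimal sketch of one standard definition: for an n×k matrix U whose columns are the top-k singular vectors, the coherence is n times the largest squared row norm of U. The function name `coherence` and its exact normalization are illustrative assumptions, not notation taken from the paper.

```python
import numpy as np

def coherence(U):
    """One standard coherence measure for an n x k matrix U with
    orthonormal columns (e.g. the top-k singular vectors).

    Returns n * max_i ||U[i, :]||^2, which ranges from 1 (singular
    vectors spread evenly over coordinates) to n (a singular vector
    aligned with a standard basis vector)."""
    n = U.shape[0]
    return n * np.max(np.sum(U ** 2, axis=1))

# A perfectly "incoherent" vector: mass spread evenly -> coherence 1.
u_flat = np.ones((4, 1)) / 2.0
# A maximally coherent vector: aligned with a basis vector -> coherence n.
u_spiky = np.zeros((4, 1)); u_spiky[0, 0] = 1.0
```

Under this definition, data whose singular vectors look like `u_flat` admit much smaller private error than data that look like `u_spiky`, matching the summary's point that μ is often far below d in practice.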
The core algorithm is a privacy‑preserving version of the classic power iteration. Starting from a random unit vector, each iteration multiplies by the data matrix A, normalizes, and adds Gaussian noise calibrated to the sensitivity of the iteration step. The novelty lies in a robust, fine‑grained analysis that ties the accumulated noise error directly to μ rather than d. By modeling the iterative process as a Markov chain and applying advanced composition theorems, the authors show that after a modest number of iterations the output vector \(\tilde{v}\) satisfies an approximation guarantee whose error scales with the coherence μ and the privacy parameters rather than polynomially with the dimension d.
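The noisy iteration described above can be sketched as follows. This is a minimal illustration of the multiply–perturb–normalize loop, not the paper's exact algorithm: the function name `private_power_iteration` is hypothetical, and `noise_scale` is left as a free parameter that a real implementation would calibrate to the sensitivity of the step and the privacy budget (ε, δ).

```python
import numpy as np

def private_power_iteration(A, iters, noise_scale, rng):
    """Sketch of noisy power iteration.

    Each round multiplies the current iterate by A, adds Gaussian
    noise (standing in for privacy-calibrated perturbation), and
    re-normalizes to the unit sphere."""
    n = A.shape[1]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = A @ x + rng.normal(scale=noise_scale, size=n)  # perturbed step
        x = y / np.linalg.norm(y)                          # project back to unit norm
    return x

# Toy usage: a diagonal matrix with a clear spectral gap, so the
# leading singular vector is the first standard basis vector.
rng = np.random.default_rng(0)
A = np.diag([5.0, 1.0, 0.5])
v = private_power_iteration(A, iters=50, noise_scale=0.01, rng=rng)
```

With a large spectral gap and small noise, the iterate aligns closely with the top singular vector; the paper's contribution is showing how much noise can be tolerated as a function of the coherence μ rather than the dimension.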