Finding Differentially Private Second Order Stationary Points in Stochastic Minimax Optimization
We provide the first study of the problem of finding differentially private (DP) second-order stationary points (SOSP) in stochastic (non-convex) minimax optimization. Existing literature focuses either on first-order stationary points for minimax problems or on SOSP for classical stochastic minimization; this work gives the first unified and detailed treatment of both empirical and population risks. Specifically, we propose a purely first-order method that combines a nested gradient descent–ascent scheme with SPIDER-style variance reduction and Gaussian perturbations to ensure privacy. A key technical device is a block-wise ($q$-period) analysis that controls the accumulation of stochastic variance and privacy noise without summing over the full iteration horizon. Under standard smoothness, Hessian-Lipschitzness, and strong-concavity assumptions, we establish high-probability guarantees for reaching an $(\alpha, \sqrt{\rho_\Phi \alpha})$-approximate second-order stationary point with $\alpha = \mathcal{O}\big(\big(\tfrac{\sqrt{d}}{n\varepsilon}\big)^{2/3}\big)$ for empirical risk objectives and $\alpha = \mathcal{O}\big(\tfrac{1}{n^{1/3}} + \big(\tfrac{\sqrt{d}}{n\varepsilon}\big)^{1/2}\big)$ for population objectives, matching the best known rates for private first-order stationarity.
💡 Research Summary
This paper initiates the study of differentially private (DP) second‑order stationary points (SOSP) for stochastic non‑convex–strongly‑concave minimax optimization. While prior work either addresses DP first‑order stationarity in min‑max settings or DP‑SOSP for single‑level stochastic minimization, no algorithm has tackled both the hierarchical structure and second‑order guarantees under privacy constraints. The authors propose a purely first‑order method, DP‑Recursive Gradient Descent Ascent (DP‑RGDA), that integrates a nested gradient descent–ascent scheme with SPIDER‑style variance reduction and Gaussian perturbations for privacy.
The problem is to find an $\alpha$-SOSP of the value function $\Phi(x) = \max_y f(x,y)$, where $f(x,y) = \mathbb{E}_{\zeta \sim \mathcal{D}}[F(x,y;\zeta)]$ is the population risk (or its empirical average over $n$ samples), and $f$ is smooth, Hessian-Lipschitz, and strongly concave in $y$.
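The full DP-RGDA algorithm uses SPIDER-style variance reduction and a $q$-period analysis; the core loop structure can be illustrated by a much simpler sketch, assuming stochastic gradient oracles and a basic Gaussian-mechanism perturbation. The names `dp_gda`, `clip`, and `sigma` are illustrative, not from the paper, and the noise scale here is not calibrated to a specific $(\varepsilon,\delta)$ budget:

```python
import numpy as np

def dp_gda(grad_x, grad_y, x0, y0, n_outer=200, n_inner=10,
           eta_x=0.05, eta_y=0.1, clip=1.0, sigma=0.01, seed=None):
    """Simplified noisy gradient descent-ascent sketch (not the paper's
    full DP-RGDA: no SPIDER variance reduction, no q-period schedule).

    grad_x, grad_y -- stochastic gradient oracles of f in x and y.
    Each gradient is clipped to norm `clip`, then perturbed with
    Gaussian noise of scale sigma * clip (Gaussian mechanism).
    """
    rng = np.random.default_rng(seed)
    x, y = np.array(x0, dtype=float), np.array(y0, dtype=float)

    def privatize(g):
        g = g / max(1.0, np.linalg.norm(g) / clip)          # clip sensitivity
        return g + sigma * clip * rng.standard_normal(g.shape)

    for _ in range(n_outer):
        # inner ascent approximates y*(x) = argmax_y f(x, y)
        for _ in range(n_inner):
            y = y + eta_y * privatize(grad_y(x, y))
        # outer descent on x uses the resulting noisy gradient proxy
        x = x - eta_x * privatize(grad_x(x, y))
    return x, y
```

For instance, with the strongly-concave-in-$y$ quadratic $f(x,y) = \tfrac12\|x\|^2 + x^\top y - \tfrac12\|y\|^2$ (gradients $x+y$ and $x-y$), the iterates drive $x$ toward the stationary point $x=0$ up to a noise floor determined by `sigma`.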