Why Does Adaptive Zeroth-Order Optimization Work?
Zeroth-order (ZO) optimization is popular in real-world applications where accessing gradient information is expensive or unavailable. Recently, adaptive ZO methods that normalize gradient estimators by the empirical standard deviation of function values have achieved strong practical performance, particularly in fine-tuning large language models. However, the theoretical understanding of this strategy remains limited. In this work, we show that the empirical standard deviation is, with high probability, closely proportional to the norm of the (stochastic) gradient. Based on this insight, we analyze adaptive ZO methods under the generalized $(L_0,L_1)$-smoothness condition with respect to the matrix norm. We establish explicit convergence rates and query complexity bounds for both deterministic and stochastic settings, demonstrating that adaptive ZO methods achieve faster convergence and improved query efficiency compared to vanilla ZO methods with a fixed step size.
💡 Research Summary
The paper investigates the theoretical foundations of adaptive zeroth-order (ZO) optimization, a class of gradient-free methods that estimate gradients by querying the objective function at randomly perturbed points. In practice, many recent works (especially in large-language-model fine-tuning) have observed that normalizing the gradient estimator by the empirical standard deviation of the sampled function values—i.e., updating with
$$x_{t+1} = x_t - \eta\, g(x_t)/\sigma_t$$
—dramatically improves query efficiency compared with the vanilla ZO scheme that uses a fixed step size. However, prior to this work, there was no rigorous explanation for why this "standard-deviation normalization" is beneficial.
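The normalized update above can be sketched as follows. This is an illustrative implementation, not the paper's exact algorithm: it assumes the gradient estimator is built from `q` two-point finite differences along Gaussian directions, and that $\sigma_t$ is the empirical standard deviation of those sampled finite-difference values. The function `adaptive_zo_step` and all parameter names are hypothetical.

```python
import numpy as np

def adaptive_zo_step(f, x, eta=0.1, mu=1e-4, q=20, rng=None):
    """One adaptive ZO step (illustrative sketch, not the paper's algorithm).

    Builds a gradient estimate from q two-point finite differences along
    random Gaussian directions, then normalizes the step by the empirical
    standard deviation of the sampled finite-difference values.
    """
    rng = np.random.default_rng() if rng is None else rng
    deltas = np.empty(q)
    g = np.zeros_like(x)
    for i in range(q):
        u = rng.standard_normal(x.size)
        # Directional finite difference: approximately u . grad f(x).
        deltas[i] = (f(x + mu * u) - f(x - mu * u)) / (2 * mu)
        g += deltas[i] * u
    g /= q
    # Empirical standard deviation of the sampled function-value differences;
    # small epsilon guards against division by zero near a stationary point.
    sigma = deltas.std() + 1e-12
    return x - eta * g / sigma

# Usage: minimize a simple quadratic f(x) = ||x||^2 / 2.
f = lambda x: 0.5 * np.dot(x, x)
rng = np.random.default_rng(0)
x = np.ones(10)
for _ in range(200):
    x = adaptive_zo_step(f, x, rng=rng)
```

Because the step is normalized, its magnitude stays roughly `eta` regardless of the gradient scale, which is the behavior the paper's analysis attributes to the improved query efficiency.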
The authors first prove that the empirical standard deviation $\sigma_t$ is, with high probability, tightly proportional to the norm of the (stochastic) true gradient $\|\nabla f(x_t)\|$.
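This proportionality is easy to check numerically, at least heuristically: for a Gaussian direction $u$ and small smoothing radius $\mu$, the finite difference $(f(x+\mu u)-f(x-\mu u))/(2\mu)$ is approximately $u^\top \nabla f(x)$, which is distributed as $\mathcal{N}(0, \|\nabla f(x)\|^2)$, so its standard deviation over directions equals the gradient norm. The test function and all names below are assumptions for illustration, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, mu, q = 50, 1e-5, 20_000

f = lambda x: np.sum(np.sin(x))   # smooth test function (assumed)
grad_f = lambda x: np.cos(x)      # its exact gradient, for comparison

x = rng.standard_normal(d)
U = rng.standard_normal((q, d))

# Sampled two-point finite differences along random Gaussian directions.
deltas = np.array([(f(x + mu * u) - f(x - mu * u)) / (2 * mu) for u in U])

# Empirical std of the finite differences vs. the true gradient norm.
ratio = deltas.std() / np.linalg.norm(grad_f(x))
print(ratio)
```

With many sampled directions the printed ratio is close to 1, matching the high-probability proportionality the analysis establishes.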