Noise-Adaptive Layerwise Learning Rates: Accelerating Geometry-Aware Optimization for Deep Neural Network Training
Geometry-aware optimization algorithms, such as Muon, have achieved remarkable success in training deep neural networks (DNNs). These methods leverage the underlying geometry of DNNs by selecting appropriate norms for different layers and updating parameters via norm-constrained linear minimization oracles (LMOs). However, even within a group of layers associated with the same norm, the local curvature can be heterogeneous across layers and vary dynamically over the course of training. For example, recent work shows that sharpness varies substantially across transformer layers and throughout training, yet standard geometry-aware optimizers impose fixed learning rates on layers within the same group, which may be inefficient for DNN training. In this paper, we introduce a noise-adaptive layerwise learning rate scheme on top of geometry-aware optimization algorithms and substantially accelerate DNN training compared to methods that use fixed learning rates within each group. Our method estimates gradient variance in the dual norm induced by the chosen LMO on the fly, and uses it to assign time-varying noise-adaptive layerwise learning rates within each group. We provide a theoretical analysis showing that our algorithm achieves a sharp convergence rate. Empirical results on transformer architectures such as LLaMA and GPT demonstrate that our approach achieves faster convergence than state-of-the-art optimizers.
💡 Research Summary
The paper addresses a critical limitation of recent geometry‑aware optimizers such as Muon, Scion, and D‑Muon: they assign a single, fixed learning rate to all layers that share the same norm constraint, even though empirical evidence shows that gradient noise and local curvature vary dramatically across layers and over the course of training. To remedy this, the authors propose LANTON (Layer‑wise Noise‑adaptive learning rate scaling with Operator Norms), a method that augments any geometry‑aware optimizer with a per‑layer, time‑varying learning‑rate factor derived from an on‑the‑fly estimate of stochastic gradient variance measured in the dual norm induced by the layer’s LMO.
Algorithmic core.
At each iteration, for every layer ℓ the stochastic gradient Gℓ,t is computed and combined with a momentum buffer Bℓ,t (as in standard Muon/Scion). The direction Oℓ,t = LMO(Bℓ,t) is obtained using the norm appropriate for the layer’s group (e.g., RMS→RMS operator norm with nuclear‑norm dual for hidden‑matrix layers, ℓ₁→ℓ_∞ norm with ℓ₁→ℓ₁ dual for embedding/LM‑head matrices, RMS norm with ℓ₂ dual for normalization vectors). To capture noise, a second‑moment‑like buffer Hℓ,t is updated as
Hℓ,t = β₂ Hℓ,t−1 + (1 − β₂) ‖Gℓ,t − G̃ℓ,t‖_*²,
where ‖·‖_* denotes the dual norm. In practice the authors approximate the independent gradient G̃ℓ,t by the previous step's gradient Gℓ,t−1, avoiding extra forward passes. The adaptive scaling factor is then
αℓ,t = α / √(α² + Hℓ,t),
and the effective learning rate for layer ℓ becomes
ηℓ,t = η_t · (αℓ,t / α_m,t),
with η_t following a cosine‑decay schedule and α_m,t = max_{j∈G_ℓ} α_j,t (the maximum scaling within the layer’s group). Consequently, layers with larger estimated noise receive smaller steps, while quieter layers can take larger steps, aligning with classical stochastic optimization intuition.
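As a concrete illustration, the scheme above can be sketched in plain Python. This is a minimal sketch, not the paper's implementation: the function names (`dual_norm_sq`, `update_noise_buffer`, `layer_lrs`) are our own, gradients are flattened to plain lists, and a squared Frobenius-style norm stands in for the group-specific dual norm (nuclear, ℓ₁, or ℓ₂, depending on the layer type).

```python
import math

def dual_norm_sq(diff):
    """Squared dual norm of a gradient difference.
    Stand-in: a squared Frobenius-style norm on a flattened gradient;
    the actual dual norm depends on the layer's group (e.g. nuclear
    for hidden matrices, per the paper)."""
    return sum(x * x for x in diff)

def update_noise_buffer(H_prev, g_t, g_prev, beta2=0.95):
    """EMA update for H_{l,t} = beta2 * H_{l,t-1} + (1 - beta2) * ||G_t - G~_t||_*^2.
    Following option I in the paper, the previous gradient g_prev
    approximates the independent gradient G~_t."""
    diff = [a - b for a, b in zip(g_t, g_prev)]
    return beta2 * H_prev + (1.0 - beta2) * dual_norm_sq(diff)

def layer_lrs(eta_t, H_group, alpha=1.0):
    """Per-layer learning rates within one norm group:
    alpha_{l,t} = alpha / sqrt(alpha^2 + H_{l,t}),
    eta_{l,t}   = eta_t * alpha_{l,t} / max_j alpha_{j,t}."""
    scales = [alpha / math.sqrt(alpha * alpha + H) for H in H_group]
    a_max = max(scales)  # alpha_{m,t}: least-noisy layer in the group
    return [eta_t * s / a_max for s in scales]
```

Note how the normalization by α_m,t means the least noisy layer in a group always receives the full scheduled rate η_t, while noisier layers are damped relative to it.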
Theoretical contribution.
Under standard assumptions (smooth, bounded‑below objective; bounded variance in the dual norm), the authors prove that LANTON attains a convergence rate of
˜O(1/√T + p·P_ℓ·σ̄_ℓ / T^{1/4})
for the dual‑norm gradient norm, where p is the number of parameter groups, P_ℓ reflects the relative dimension of group ℓ, and σ̄_ℓ is an upper bound on the noise of layer ℓ. Relative to the ˜O(1/√T) rate of existing geometry‑aware methods, the additional T^{‑1/4} term scales with the per‑layer noise levels σ̄_ℓ rather than a single worst‑case group bound, making explicit how layer‑wise noise heterogeneity enters the rate.
Empirical validation.
The method is evaluated on large language models (LLaMA‑7B, GPT‑2‑XL) and on vision models (ResNet‑50). In all settings, LANTON matches or exceeds the baseline optimizers in terms of loss reduction per epoch, final perplexity (language) or top‑1 accuracy (vision), and sample efficiency. Notably, during early training phases where Q‑K attention matrices exhibit high gradient variance, LANTON automatically reduces their learning rates, preventing instability and accelerating convergence. The extra computational overhead is modest: the variance buffer requires only a few vector norm calculations and EMA updates, adding less than 5 % to overall runtime and memory usage.
Implementation details.
The algorithm reuses the existing LMO implementations (Newton‑Schulz for low‑rank matrix updates, sign‑operator for embedding matrices, RMS normalization for vectors). The dual‑norm noise estimator is computed in the same space as the LMO, ensuring theoretical consistency. Two options for noise estimation are discussed: (I) using the previous gradient (practical, no extra batch), and (II) using an independent gradient (theoretically cleaner). Experiments show that option I suffices in practice.
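For the matrix LMO, the orthogonalization step can be sketched with the classical cubic Newton‑Schulz iteration. This is a stand‑in under stated assumptions: Muon‑style implementations use a tuned quintic polynomial for faster convergence, and the pure‑Python matrix helpers here are for illustration only (a real implementation would use batched GPU matmuls).

```python
import math

def matmul(A, B):
    """Naive list-of-lists matrix product (illustration only)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def newton_schulz_orth(G, steps=12, eps=1e-7):
    """Approximate the orthogonal polar factor U V^T of a square matrix G
    via the cubic Newton-Schulz iteration X <- 1.5 X - 0.5 (X X^T) X.
    Normalizing by the Frobenius norm puts all singular values in (0, 1],
    inside the iteration's convergence region."""
    fro = math.sqrt(sum(x * x for row in G for x in row)) + eps
    X = [[x / fro for x in row] for row in G]
    for _ in range(steps):
        AX = matmul(matmul(X, transpose(X)), X)
        X = [[1.5 * x - 0.5 * ax for x, ax in zip(rx, rax)]
             for rx, rax in zip(X, AX)]
    return X
```

Because each iterate keeps the singular vectors of G and only pushes the singular values toward 1, the result approximates U V^T from the SVD of G, which is exactly the direction a spectral-norm LMO returns.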
Impact and future work.
LANTON demonstrates that integrating noise adaptivity into geometry‑aware optimization yields tangible speed‑ups and stability gains for training modern, large‑scale DNNs. The authors suggest extensions such as richer dual‑norm families, asynchronous distributed variance estimation, and application to non‑matrix architectures (e.g., graph neural networks). Overall, the paper provides a compelling blend of theory, algorithmic design, and empirical evidence, positioning LANTON as a strong candidate for next‑generation optimizers in foundation‑model pretraining.