On the Convergence Rate of LoRA Gradient Descent
The low-rank adaptation (LoRA) algorithm for fine-tuning large models has grown popular in recent years due to its remarkable performance and low computational requirements. LoRA trains two "adapter" matrices that form a low-rank representation of the model parameters, thereby massively reducing the number of parameters that need to be updated at every step. Although LoRA is simple, its convergence is poorly understood due to the lack of Lipschitz smoothness, a key condition for classic convergence analyses. As a result, current theoretical results only consider asymptotic behavior or assume strong boundedness conditions that artificially enforce Lipschitz smoothness. In this work, we provide for the first time a non-asymptotic convergence analysis of the *original LoRA gradient descent* algorithm, which reflects widespread practice, without such assumptions. Our work relies on three key steps: i) reformulating the problem in terms of the outer product of the stacked adapter matrices, ii) a modified descent lemma for the "Lipschitz-like" reparametrized function, and iii) controlling the step size. With this approach, we prove that LoRA gradient descent converges to a stationary point at rate O(1/log T), where T is the number of iterations. We conduct numerical experiments to validate our theoretical findings.
💡 Research Summary
The paper delivers the first non‑asymptotic convergence analysis of the original LoRA (Low‑Rank Adaptation) gradient descent algorithm, which is the de‑facto standard for fine‑tuning large pretrained models. While LoRA’s empirical success is well‑documented, its theoretical understanding has lagged because the re‑parameterization W = W₀ + B A makes the loss function non‑Lipschitz in the adapter matrices A and B, even when the underlying loss ℓ is smooth. Existing works either study infinite‑width limits, propose memory‑efficient variants, or impose strong boundedness assumptions on A and B that artificially restore Lipschitz smoothness.
The authors overcome these obstacles through three technical steps. First, they stack the two adapters into a single matrix V ∈ ℝ^{(m+n)×r} (V = [B; Aᵀ]) and reformulate the objective in terms of the outer product VVᵀ, which contains the product BA as an off-diagonal block. Second, they establish a modified descent lemma for the resulting "Lipschitz-like" reparametrized function, which lacks classic Lipschitz smoothness. Third, they control the step size so that the iterates provably decrease the loss, yielding convergence to a stationary point at rate O(1/log T).
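To make the analyzed algorithm concrete, here is a minimal NumPy sketch of plain LoRA gradient descent, W = W₀ + BA with W₀ frozen and only B and A updated. The least-squares objective, dimensions, initialization scale, and step size are illustrative assumptions for this sketch, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 8, 6, 2

W0 = rng.standard_normal((m, n))  # frozen pretrained weights
# Hypothetical target: W0 plus a small rank-r perturbation (illustrative only).
W_star = W0 + 0.1 * (rng.standard_normal((m, r)) @ rng.standard_normal((r, n)))

B = np.zeros((m, r))                    # common LoRA init: B = 0, so B A = 0 at start
A = 0.01 * rng.standard_normal((r, n))  # small random init for A

def loss(B, A):
    """Toy smooth loss in W; non-Lipschitz-smooth as a function of (B, A)."""
    R = W0 + B @ A - W_star
    return 0.5 * float(np.sum(R * R))

loss0 = loss(B, A)
eta = 1e-2  # fixed step size (illustrative)
for _ in range(2000):
    R = W0 + B @ A - W_star          # gradient of the loss with respect to W
    gB, gA = R @ A.T, B.T @ R        # chain rule through W = W0 + B A
    B, A = B - eta * gB, A - eta * gA  # simultaneous gradient descent update

final_loss = loss(B, A)
print(loss0, final_loss)
```

Note the source of the analytical difficulty: the gradients gB and gA each involve the other adapter, so the effective smoothness constant grows with the iterates' magnitude, which is what the paper's modified descent lemma and step-size control address.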