Rank-1 Approximation of Inverse Fisher for Natural Policy Gradients in Deep Reinforcement Learning

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Natural gradients have long been studied in deep reinforcement learning due to their fast convergence properties and covariant weight updates. However, computing natural gradients requires inverting the Fisher Information Matrix (FIM) at each iteration, which is computationally prohibitive. In this paper, we present an efficient and scalable natural policy optimization technique that leverages a rank-1 approximation to the full inverse FIM. We show theoretically that, under certain conditions, this rank-1 approximation converges faster than vanilla policy gradients and, under further conditions, matches the sample complexity of stochastic policy gradient methods. We benchmark our method on a diverse set of environments and show that it achieves superior performance to standard actor-critic and trust-region baselines.


💡 Research Summary

The paper tackles the long‑standing computational bottleneck of natural policy gradient (NPG) methods: the need to invert the Fisher Information Matrix (FIM) at every update. While prior work has relied on diagonal approximations, Kronecker‑factored (K‑FAC) schemes, or Hessian‑free conjugate‑gradient (CG) solvers, these approaches either sacrifice curvature information or still incur substantial memory and time costs, especially for deep neural‑network policies with millions of parameters.

The authors propose a radically simple yet theoretically grounded solution: use the empirical Fisher (EF) computed from a single sampled state‑action pair per minibatch, add a small damping term λI, and treat the resulting matrix as a rank‑1 update of a scaled identity. By applying the Sherman‑Morrison formula, the inverse of this matrix can be updated from the previous inverse using only three vector operations, yielding O(d) computational and memory complexity, where d is the number of policy parameters. The resulting update direction is

Δθₖ = η (λI + gₖgₖᵀ)⁻¹ gₖ,

where gₖ = ∇θ log πθₖ(aₖ|sₖ) A(sₖ,aₖ) is the standard policy‑gradient estimate for the sampled transition, and η is a fixed step size. This “Sherman‑Morrison Actor‑Critic” (SMAC) algorithm replaces the costly CG or matrix‑factorization steps in conventional NPG with a single matrix‑vector product and an outer‑product, making it comparable in runtime to vanilla stochastic policy gradient methods while still incorporating curvature information.
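To make the O(d) claim concrete: by the Sherman‑Morrison formula, (λI + gₖgₖᵀ)⁻¹gₖ collapses to the closed form gₖ / (λ + gₖᵀgₖ), so no d×d matrix is ever materialized. The sketch below (function names are illustrative, not from the paper) computes this direction and checks it against an explicit solve:

```python
import numpy as np

def smac_direction(g, lam):
    """Natural-gradient direction (lam*I + g g^T)^{-1} g via Sherman-Morrison.

    Sherman-Morrison gives (lam*I + g g^T)^{-1} = I/lam - g g^T / (lam*(lam + g^T g)).
    Multiplying by g and simplifying yields the closed form g / (lam + g^T g),
    which costs O(d) time and memory.
    """
    return g / (lam + g @ g)

# Sanity check against the explicit d x d inverse (only viable for small d).
rng = np.random.default_rng(0)
d, lam = 5, 0.1
g = rng.standard_normal(d)               # stand-in for the sampled policy gradient
direct = np.linalg.solve(lam * np.eye(d) + np.outer(g, g), g)
assert np.allclose(smac_direction(g, lam), direct)
```

Note that the closed form only rescales gₖ, so curvature information enters purely through the adaptive step length λ + ‖gₖ‖², which shrinks the update when the sampled gradient is large.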

Theoretical contributions include:

  1. A global convergence proof for the stochastic SMAC update under standard Lipschitz and bounded‑variance assumptions, extending the analysis of Agarwal et al. (2021) and Liu et al. (2022).
  2. A bound on the performance gap J(π*) − E
