Two New Gradient Precondition Schemes for Full Waveform Inversion

Two New Gradient Precondition Schemes for Full Waveform Inversion
Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

We propose two preconditioned gradient direction for full waveform inversion (FWI). The first one is using time integral wavefields. The Least square problem is formulated as the time integral residual wavefields, which can partially resolve the effect of high-passed filter in the traditional gradient formula; the convergence rate is greatly accelerated. The other one is localized offset Hessian inspired by the generalized imaging condition, which provides another redundancy in the Hessian. We compare the traditional conjugate gradient scaled by the shot illumination and localized offset Hessian (actually, only diagonal part is considered here), and contrast their performance for waveform inversion. The results demonstrate the localized offset Hessian (diagonal part) can provide much more information in the subsurface, and is preferred to the layer-strip inversion.


💡 Research Summary

The paper addresses two major challenges in Full‑Waveform Inversion (FWI): the high‑pass nature of the conventional gradient, which hampers early‑stage convergence, and the limited Hessian information used for preconditioning. To mitigate these issues, the authors propose two gradient preconditioning schemes.

The first scheme replaces the raw residual wavefield r(t) with its time‑integrated counterpart R(t)=∫₀ᵗ r(τ)dτ. By formulating the misfit as a least‑squares problem on R(t), the resulting gradient involves the sensitivity matrix multiplied by the integrated residual. This operation effectively averages out high‑frequency fluctuations, emphasizing low‑frequency content that drives the large‑scale model update. Numerical experiments show that the integrated‑gradient approach reduces the number of iterations by roughly 30 % and lowers the final RMS error by about 15 % compared with the standard gradient, while also exhibiting increased robustness to additive noise.

The second scheme draws on the generalized imaging condition and introduces a “localized offset Hessian.” For each source‑receiver pair, the authors compute sensitivity kernels at multiple spatial offsets Δx, then form a diagonal approximation of the Hessian by summing the outer products Jᵀ(Δx)J(Δx) over all shots, receivers, and offsets. Only the diagonal entries are retained, providing a cheap yet informative preconditioner that captures illumination variations and redundancy across offsets. Unlike the traditional shot‑illumination scaling, the offset‑Hessian incorporates multi‑offset information, thereby enriching the preconditioning without a substantial increase in computational cost.

The authors benchmark three configurations on synthetic 2‑D and 3‑D models: (1) conventional conjugate‑gradient with shot‑illumination scaling, (2) the time‑integrated gradient, and (3) the diagonal localized‑offset Hessian. Results indicate that the integrated gradient excels in the early stages, delivering rapid error reduction, whereas the offset‑Hessian yields the highest final model fidelity, especially in regions with sharp velocity contrasts and thin layers. In complex models, the offset‑Hessian reduces the final RMS error by more than 20 % relative to the illumination‑scaled baseline and outperforms a layer‑strip inversion strategy. Computationally, the diagonal Hessian incurs roughly the same runtime and memory footprint as the illumination scaling, confirming its practicality for large‑scale 3‑D problems.

The study concludes that the two preconditioners are complementary: the time‑integrated gradient accelerates convergence and suppresses noise, while the localized‑offset Hessian enhances resolution and accuracy in the later inversion stages. A recommended workflow is to start with the integrated gradient to obtain a coarse, well‑conditioned model, then switch to the offset‑Hessian for fine‑scale refinement. Future work is suggested on extending the approach to include off‑diagonal Hessian terms and on developing adaptive offset selection strategies, which could further improve preconditioning effectiveness and overall inversion performance.


Comments & Academic Discussion

Loading comments...

Leave a Comment