Enforcing Reciprocity in Operator Learning for Seismic Wave Propagation
Accurate and efficient wavefield modeling underpins seismic structure and source studies. Traditional methods comply with physical laws but are computationally intensive. Data-driven methods, while opening new avenues for advancement, have yet to incorporate strict physical consistency. The principle of reciprocity is one of the most fundamental physical laws in wave propagation. We introduce the Reciprocity-Enforced Neural Operator (RENO), a transformer-based architecture for modeling seismic wave propagation that hard-codes the reciprocity principle. The model leverages the cross-attention mechanism and commutative operations to guarantee invariance under swapping of source and receiver positions. Beyond improved physical consistency, the proposed architecture supports simultaneous realizations for multiple sources without crosstalk issues. This yields an order-of-magnitude inference speedup at a similar memory footprint over a reciprocity-unenforced neural operator in a realistic configuration. We demonstrate the functionality using the reciprocity relation for particle velocity fields under single forces. The architecture also applies to pressure fields under dilatational sources and travel-time fields governed by the eikonal equation, paving the way for encoding more complex reciprocity relations.
💡 Research Summary
The paper introduces the Reciprocity‑Enforced Neural Operator (RENO), a transformer‑based neural operator that hard‑codes the reciprocity principle of seismic wave propagation into its architecture. Traditional numerical solvers faithfully obey physical laws but are computationally expensive, while data‑driven neural operators can accelerate simulations but often ignore strict physical constraints, leading to violations of fundamental symmetries such as reciprocity. Reciprocity states that swapping source and receiver positions yields identical wavefields; this symmetry is widely exploited in seismic processing but has never been embedded directly into a neural operator’s design.
RENO addresses this gap by constructing a “Reciprocity Block” within a transformer backbone. Input data consist of medium parameters (Vp, Vs, density) and point‑cloud representations of source and receiver locations. A Graph Neural Operator (GNO) first compresses the point cloud into a small set of latent “super‑nodes,” providing discretization‑agnostic encoding. The compressed representation is then processed by several transformer layers. In the Reciprocity Block, the source and receiver coordinates are concatenated in both possible orders, passed through the same multilayer perceptron (MLP), and the resulting query vectors Q₁ and Q₂ are averaged. Because averaging is a commutative operation, the final query is invariant to the order of source and receiver, guaranteeing that the model’s output respects reciprocity regardless of training data. This query is fed to a cross‑attention decoder that simultaneously produces wavefield solutions for all requested source‑receiver pairs.
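The commutative trick at the heart of the Reciprocity Block can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the paper's implementation: the shared MLP is stood in for by a single linear layer with a tanh nonlinearity, and the layer sizes, coordinates, and random seed are illustrative assumptions.

```python
# Minimal sketch of the commutative query construction described above.
# The shared MLP is approximated by one fixed linear layer + tanh;
# sizes and values below are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))  # input = concat(src, rcv), both 2-D coordinates
b = rng.standard_normal(8)

def mlp(x):
    """Stand-in for the shared MLP applied to a concatenated coordinate pair."""
    return np.tanh(x @ W + b)

def reciprocity_query(src, rcv):
    """Average the MLP output over both concatenation orders.

    Because averaging is commutative, the resulting query is invariant
    under swapping the source and receiver coordinates, so the decoder
    that consumes it respects reciprocity by construction.
    """
    q1 = mlp(np.concatenate([src, rcv]))  # order (source, receiver)
    q2 = mlp(np.concatenate([rcv, src]))  # order (receiver, source)
    return 0.5 * (q1 + q2)

src = np.array([0.1, 0.9])
rcv = np.array([0.7, 0.3])
assert np.allclose(reciprocity_query(src, rcv), reciprocity_query(rcv, src))
```

Any commutative reduction (sum, mean, elementwise max) would give the same invariance; averaging keeps the query on the same scale as a single MLP output.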
The authors train RENO on frequency‑domain Helmholtz solutions (particle velocity) for vertical forces in a 2‑D heterogeneous, anisotropic, visco‑elastic medium. Training data are generated with a high‑resolution staggered‑grid finite‑difference solver; 10 000 simulations are used for training and 1 000 for validation. A baseline neural operator lacking the Reciprocity Block serves as a comparison.
Key experimental findings:
- **Physical Consistency** – When evaluated on a single‑simulation test where source and receiver are swapped, RENO reproduces the exact same wavefield as the original, confirming that reciprocity is hard‑coded. The baseline model, having learned reciprocity only implicitly, fails to generate meaningful signals in the swapped configuration.
- **Learning Dynamics** – Both models achieve comparable ℓ₂ loss after full training, but RENO converges faster in early epochs. More importantly, a defined “reciprocal error” (relative difference between L(xₛ,xᵣ) and L(xᵣ,xₛ)) remains exactly zero for RENO throughout training, while the baseline’s error decreases but never reaches zero, reflecting the advantage of embedding physics directly into the architecture.
- **Computational Efficiency** – Because the transformer’s cross‑attention can handle arbitrary numbers of source‑receiver pairs in a single query, RENO evaluates 234 sources × 339 receivers (79 326 pairs) in 0.31 s on an NVIDIA RTX A6000 GPU, using ~18 GB of memory. The baseline requires 9.34 s and 19 GB for the same task and runs out of memory when attempting larger batches. This translates to roughly a 30‑fold speedup at comparable memory usage.
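The “reciprocal error” metric from the learning-dynamics comparison is easy to write down, and it makes the architectural guarantee concrete: for any operator symmetrized by the averaged-query trick, the error is identically zero, while a generic operator must learn it down. The toy operators below are hypothetical stand-ins, not the trained models.

```python
# Toy illustration of the reciprocal-error metric: the relative
# difference between L(xs, xr) and L(xr, xs). The operators here are
# hypothetical stand-ins for the baseline and RENO, not trained models.
import numpy as np

def reciprocal_error(L, xs, xr):
    """Relative difference between L(xs, xr) and L(xr, xs)."""
    a, b = L(xs, xr), L(xr, xs)
    return np.linalg.norm(a - b) / np.linalg.norm(a)

# A non-symmetric operator (baseline-like): nonzero reciprocal error.
asym = lambda s, r: np.sin(s) + 2.0 * np.cos(r)
# Its symmetrization (RENO-like hard constraint): error is exactly zero,
# since averaging over both argument orders is commutative.
sym = lambda s, r: 0.5 * (asym(s, r) + asym(r, s))

xs, xr = np.array([0.2, 0.4]), np.array([0.8, 0.1])
assert reciprocal_error(sym, xs, xr) == 0.0
assert reciprocal_error(asym, xs, xr) > 0.0
```

Note the symmetrized operator's error is zero in exact floating point, not merely small: both evaluations sum the same two terms, so no training signal is needed to enforce the symmetry.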
The paper argues that hard‑coding reciprocity not only improves data efficiency—effectively doubling the information content of each training example—but also enables massive parallelism without crosstalk, a crucial benefit for large‑scale seismic surveys and real‑time monitoring. The authors note that the same architectural principle can be extended to other wave phenomena where reciprocity holds, such as pressure fields from dilatational sources or travel‑time fields governed by the eikonal equation. Future work is suggested to incorporate more complex source types (force couples, moments), tensorial responses (stress, strain), and hybrid physics‑informed loss functions to further enhance robustness.
In summary, RENO demonstrates that embedding fundamental physical symmetries directly into neural operator designs yields models that are physically exact, faster to train, and dramatically more efficient at inference, offering a promising pathway for scalable, physics‑consistent seismic modeling.