JAX-in-Cell: A Differentiable Particle-in-Cell Code for Plasma Physics Applications

Notice: This research summary and analysis were automatically generated using AI. For full accuracy, please refer to the original arXiv source.

JAX-in-Cell is a fully electromagnetic, multispecies, relativistic 1D3V Particle-in-Cell (PIC) framework implemented entirely in JAX. It provides a modern, Python-based alternative to traditional PIC codes, leveraging Just-In-Time compilation and automatic vectorization to match the performance of compiled codes on CPUs, GPUs, and TPUs. The resulting framework bridges the gap between educational scripts and production codes, providing a testbed for differentiable physics and AI integration that enables end-to-end gradient-based optimization. The code solves the Vlasov-Maxwell system on a staggered Yee lattice with periodic, reflective, or absorbing boundary conditions, and offers both an explicit Boris solver and an implicit Crank-Nicolson method, solved via Picard iteration, that ensures energy conservation. Here, we detail the numerical methods employed, validate the code against standard benchmarks, and showcase its automatic-differentiation capabilities.


💡 Research Summary

JAX‑in‑Cell presents a fully differentiable, fully electromagnetic, multispecies, relativistic 1D3V Particle‑in‑Cell (PIC) framework written entirely in Python using the JAX ecosystem. By exploiting JAX’s just‑in‑time (JIT) compilation via XLA and automatic vectorization (jax.vmap), the code achieves performance comparable to traditional compiled PIC codes on CPUs, GPUs, and TPUs while retaining the flexibility of a high‑level language. The framework bridges the gap between pedagogical scripts and production‑grade simulators, offering a testbed for integrating differentiable physics with modern AI workflows.
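As a minimal illustration of this JIT-plus-vmap pattern (not taken from the paper's source; the function and array names here are hypothetical), a per-particle kernel can be written for a single particle and then vectorized over all particles and compiled once:

```python
import jax
import jax.numpy as jnp

# Hypothetical per-particle kernel: advance one particle's position.
def push_position(x, v, dt):
    return x + v * dt

# vmap vectorizes over the particle axis; jit compiles the whole
# update through XLA into a single fused kernel for CPU/GPU/TPU.
push_all = jax.jit(jax.vmap(push_position, in_axes=(0, 0, None)))

x = jnp.zeros(1024)
v = jnp.ones(1024)
x_new = push_all(x, v, 0.1)   # all 1024 particles advanced in one call
```

Because the vectorized function is a pure array-in/array-out transformation, the same code runs unchanged on any backend JAX supports.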

The physical model solves the Vlasov‑Maxwell system on a staggered Yee lattice. The plasma is represented by pseudo‑particles whose phase‑space density is discretized using a triangular (3‑point) spline shape function. This same kernel is employed for charge and current deposition as well as field interpolation, guaranteeing charge‑conserving deposition consistent with the continuity equation. The electromagnetic fields obey the standard Maxwell curl equations, reduced to one spatial dimension (∂/∂y = ∂/∂z = 0). Central‑difference operators are used for interior points, while ghost cells enforce periodic, reflective, or absorbing boundary conditions. A divergence‑cleaning step projects the longitudinal electric field component to satisfy Gauss’s law, ensuring global charge conservation.
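A sketch of charge deposition with a 3-point triangular (triangular-shaped-cloud) spline on a periodic 1D grid might look as follows; the function name and interface are assumptions, not the paper's API. Each particle deposits onto its nearest cell and the two neighbours with weights that sum exactly to one, so total charge is conserved:

```python
import jax.numpy as jnp

def deposit_charge_tsc(x, q, grid_n, dx):
    """Deposit charges q at positions x onto a periodic 1D grid using
    a 3-point triangular-shaped-cloud spline (illustrative sketch)."""
    i = jnp.round(x / dx).astype(int)            # nearest grid point
    d = x / dx - i                               # offset in [-0.5, 0.5]
    w = jnp.stack([0.5 * (0.5 - d) ** 2,         # left neighbour
                   0.75 - d ** 2,                # centre
                   0.5 * (0.5 + d) ** 2])        # right neighbour
    idx = jnp.stack([i - 1, i, i + 1]) % grid_n  # periodic wrap-around
    # Scatter-add the weighted charge; weights sum to 1 per particle.
    rho = jnp.zeros(grid_n).at[idx].add(w * q) / dx
    return rho
```

The `.at[idx].add(...)` indexed update is the JAX-idiomatic scatter-add, which stays differentiable and JIT-compilable.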

Two time‑integration schemes are provided. The explicit scheme uses the classic Boris pusher for relativistic particle motion, coupled with a finite‑difference time‑domain (FDTD) update of the fields. The implicit scheme implements a Crank‑Nicolson discretization of both particle and field equations, solved via Picard iteration. The implicit method is energy‑conserving and permits much larger time steps than the explicit method, making it suitable for long‑duration simulations where stability is critical.
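The explicit particle update follows the standard Boris rotation: a half electric kick, a magnetic rotation, and a second half kick. A non-relativistic single-particle sketch is shown below (the paper's code also provides the relativistic variant; this version is illustrative only):

```python
import jax.numpy as jnp

def boris_push(v, E, B, q_over_m, dt):
    """One non-relativistic Boris step for a single particle (sketch)."""
    # First half electric acceleration
    v_minus = v + 0.5 * q_over_m * dt * E
    # Magnetic rotation: preserves |v| exactly when E = 0
    t = 0.5 * q_over_m * dt * B
    s = 2.0 * t / (1.0 + jnp.dot(t, t))
    v_prime = v_minus + jnp.cross(v_minus, t)
    v_plus = v_minus + jnp.cross(v_prime, s)
    # Second half electric acceleration
    return v_plus + 0.5 * q_over_m * dt * E
```

The rotation step is constructed so that the magnetic field does no work, which is why the Boris scheme has such good long-term energy behaviour for an explicit method.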

The code architecture is modular: simulation.py handles configuration parsing (TOML), memory allocation, and the main loop implemented with jax.lax.scan. algorithms.py contains the explicit and implicit integrators. particles.py implements relativistic and non‑relativistic Boris rotations and field interpolation, heavily vectorized. fields.py provides curl operators, Faraday and Ampère updates, and divergence cleaning. sources.py manages charge/current deposition and applies a multi‑pass binomial digital filter to suppress high‑frequency aliasing. boundary_conditions.py enforces particle and field boundary treatments. All state (E, B, particle positions, velocities, charges, masses) is stored in a single immutable tuple, enabling the entire simulation to be treated as a differentiable function.
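The main-loop design described above can be sketched as a jax.lax.scan over an immutable state tuple; the physics in this step function is a trivial placeholder, not the paper's update, but the control-flow pattern is the same: the step function consumes the carried state and returns the new state plus per-step diagnostics:

```python
import jax
import jax.numpy as jnp

def step(state, _):
    """One hypothetical time step over an immutable state tuple."""
    x, v = state
    x = x + v * 0.1            # placeholder particle push
    return (x, v), jnp.sum(x)  # carry new state, emit a diagnostic

x0 = jnp.zeros(4)
v0 = jnp.ones(4)
# scan compiles the whole loop into one XLA computation;
# diagnostics collects the per-step outputs, stacked along axis 0.
(final_x, final_v), diagnostics = jax.lax.scan(step, (x0, v0), None, length=10)
```

Because scan traces the step function once and the state is a pure pytree, the entire time loop is a single differentiable function of its inputs.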

Validation is performed on four canonical plasma benchmarks: Landau damping, the two‑stream instability, the Weibel instability, and the bump‑on‑tail instability. Initial conditions use a perturbed Maxwellian distribution with optional drift. Linear theory (including the plasma dispersion function Z(ξ)) provides analytical growth/damping rates for comparison. The explicit and implicit schemes both reproduce the expected rates; the implicit scheme maintains relative energy errors below 10⁻¹¹, whereas the explicit scheme shows unbounded drift in total energy for long runs.
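In benchmarks like these, the measured growth or damping rate is typically extracted by fitting the logarithm of the field energy during the linear phase and comparing the slope against linear theory. A synthetic example of that fit (illustrative numbers only, not the paper's data):

```python
import numpy as np

# Synthetic linear-phase signal: field energy grows as exp(2*gamma*t),
# since energy is quadratic in the field amplitude.
t = np.linspace(0.0, 10.0, 200)
gamma_true = 0.35                              # illustrative growth rate
energy = 1e-6 * np.exp(2.0 * gamma_true * t)

# Linear fit in log space; half the slope recovers gamma.
slope, _ = np.polyfit(t, np.log(energy), 1)
gamma_fit = 0.5 * slope
```

On real simulation output one would restrict the fit window to the exponential phase, before nonlinear saturation flattens the curve.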

Performance tests on NERSC’s Perlmutter system (AMD EPYC 7763 CPU and NVIDIA A100 GPU) demonstrate roughly two orders of magnitude speed‑up on the GPU for the same particle count. For 64 k particles, the GPU completes a full drift‑scan in about six seconds after the initial compilation. Switching to 32‑bit precision reduces memory usage and improves throughput, though small deviations from 64‑bit results can appear.
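The precision switch mentioned here is a one-line configuration change in JAX: arrays default to 32-bit floats, and 64-bit must be enabled explicitly before any arrays are created:

```python
import jax

# JAX defaults to float32; enable float64 for higher accuracy at the
# cost of memory and throughput. Must run before arrays are created.
jax.config.update("jax_enable_x64", True)

import jax.numpy as jnp
x = jnp.ones(3)   # now float64 instead of the default float32
```

This global flag is what makes it easy to compare 32-bit and 64-bit runs of the same simulation without touching the physics code.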

A key contribution is the demonstration of automatic differentiation (AD) for plasma‑physics optimization. By defining the dimensionless growth rate γ̂ = γ/ω_p of the two‑stream instability as a function of the drift velocity v_d, the authors compute both γ̂(v_d) and its derivative ∂γ̂/∂v_d via a forward‑mode Jacobian‑vector product in a single pass. Using a damped Newton update, the drift velocity converges to the target growth rate within six iterations, and the sensitivity |∂γ̂/∂v_d| diminishes as the optimizer approaches the solution. This showcases the feasibility of gradient‑based inverse problems, Bayesian inference of transport coefficients, real‑time control, laser‑pulse shaping, and training hybrid physics‑ML surrogate models.
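The value-and-derivative-in-one-pass pattern can be sketched with jax.jvp and a damped Newton update. The growth-rate function below is a hypothetical monotone stand-in; in the paper this map runs the full PIC simulation and extracts the linear growth rate:

```python
import jax
import jax.numpy as jnp

# Hypothetical stand-in for the growth-rate map; the real function
# would run the full PIC simulation and fit the linear growth phase.
def growth_rate(v_d):
    return jnp.tanh(v_d)      # placeholder, monotone in v_d

target = 0.5                  # desired dimensionless growth rate
v_d = jnp.asarray(1.0)        # initial drift velocity (illustrative)
for _ in range(6):
    # Value and derivative in a single forward-mode pass
    g, dg = jax.jvp(growth_rate, (v_d,), (jnp.ones_like(v_d),))
    # Damped Newton step toward the target growth rate
    v_d = v_d - 0.5 * (g - target) / dg
```

Forward mode is the natural choice here because the map has a single scalar input, so one Jacobian-vector product yields the full derivative alongside the value.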

The paper concludes by highlighting future directions: extending the framework to 2‑D and 3‑D geometries, incorporating collisional and radiative processes, and further integrating differentiable solvers into large‑scale plasma‑physics workflows. JAX‑in‑Cell thus establishes a versatile, high‑performance, and fully differentiable PIC platform that can serve both as an educational tool and as a production‑ready code for cutting‑edge plasma research.

