An Energy Conserving Parallel Hybrid Plasma Solver


📝 Original Info

  • Title: An Energy Conserving Parallel Hybrid Plasma Solver
  • ArXiv ID: 1010.3291
  • Date: 2010-10-15
  • Authors: M. Holmström

📝 Abstract

We investigate the performance of a hybrid plasma solver on the test problem of an ion beam. The parallel solver is based on cell centered finite differences in space, and a predictor-corrector leapfrog scheme in time. The implementation is done in the FLASH software framework. It is shown that the solver conserves energy well over time, and that the parallelization is efficient (it exhibits weak scaling).


📄 Full Content

Modeling of collisionless plasmas is often done using fluid magnetohydrodynamic (MHD) models. The MHD fluid approximation is, however, questionable when the gyro radius of the ions is large compared to the spatial region that is studied. On the other hand, kinetic models that discretize the full velocity space, or full particle-in-cell (PIC) models that treat ions and electrons as particles, are very computationally expensive. For problems where the ion time and spatial scales are of interest, hybrid models provide a compromise. In such models, the ions are treated as discrete particles, while the electrons are treated as an (often massless) fluid. This means that the electron time and spatial scales do not need to be resolved, which enables applications such as modeling the solar wind interaction with planets. For a detailed discussion of different models, see Ledvina et al. (2008).

Here we present a finite difference implementation of a hybrid model in the FLASH parallel computational framework, along with test cases that show that the implementation scales well and conserves energy well.

In the hybrid approximation, ions are treated as particles, and electrons as a massless fluid. In what follows we use SI units. The trajectory of an ion, r(t) and v(t), with charge q and mass m, is computed from the Lorentz force,

m dv/dt = q (E + v × B),    dr/dt = v,

where E = E(r, t) is the electric field, and B = B(r, t) is the magnetic field.
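The Lorentz-force integration can be sketched with a standard Boris-style mover, a common leapfrog-compatible particle push. Note that the paper itself uses a predictor-corrector cyclic leapfrog, so this is an illustrative stand-in, not the authors' exact scheme:

```python
import numpy as np

def lorentz_push(r, v, q, m, E, B, dt):
    """Advance one ion by dt under the Lorentz force m dv/dt = q(E + v x B),
    dr/dt = v, using the Boris rotation (half electric kick, magnetic
    rotation, half electric kick, then position drift)."""
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                   # first half electric kick
    t = qmdt2 * B                             # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)   # magnetic rotation (|v| preserved)
    v_new = v_plus + qmdt2 * E                # second half electric kick
    r_new = r + dt * v_new                    # position drift
    return r_new, v_new
```

A useful property of this mover is that with E = 0 the magnetic rotation preserves the particle speed exactly (to round-off), which is one ingredient of good energy behavior.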

The electric field is given by

E = ( −J_I × B + (∇ × B) × B / µ0 − ∇p_e ) / ρ_I,

where ρ_I is the ion charge density, J_I is the ion current, p_e is the electron pressure, and µ0 = 4π · 10⁻⁷ is the magnetic constant. Then Faraday's law is used to advance the magnetic field in time,

∂B/∂t = −∇ × E.
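As a concrete sketch, the electric field of the massless-electron hybrid model can be evaluated pointwise. This assumes ∇ × B and ∇p_e have already been computed (in the paper, by second-order stencils); the function name is my own:

```python
import numpy as np

MU0 = 4.0e-7 * np.pi  # magnetic constant, mu_0 = 4*pi*10^-7

def electric_field(rho_i, J_i, B, curl_B, grad_pe):
    """E = (-J_I x B + (curl B) x B / mu0 - grad p_e) / rho_I,
    evaluated at a single grid point (all vector arguments are
    length-3 arrays, rho_i is the scalar ion charge density)."""
    return (-np.cross(J_i, B) + np.cross(curl_B, B) / MU0 - grad_pe) / rho_i
```

For instance, with zero ion current and a curl-free field, only the electron pressure gradient survives, E = −∇p_e / ρ_I.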

We use a cell-centered representation of the magnetic field on a uniform grid. All spatial derivatives are discretized using standard second order stencils. Time advancement is done by a predictor-corrector leapfrog method with subcycling of the field update, denoted cyclic leapfrog (CL) by Matthews (1994). An advantage of the discretization is that the divergence of the magnetic field remains zero, down to round-off errors. The ion macroparticles (each representing a large number of real particles) are deposited on the grid by a cloud-in-cell method (linear weighting), and interpolation of the fields to the particle positions is done by the corresponding linear interpolation. Initial particle positions are drawn from a uniform distribution, and initial particle velocities from a Maxwellian distribution. Further details of the algorithm can be found in Holmström (2011).
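The cloud-in-cell deposition step can be sketched in one dimension as follows. This is a minimal illustration with periodic wrap-around; the cell-centered grid layout mirrors the paper's field representation, but the function itself is my own:

```python
import numpy as np

def deposit_cic(positions, charges, nx, dx):
    """Deposit macroparticle charge onto a 1-D uniform grid with
    cloud-in-cell (linear) weighting. Grid values are cell-centered
    at x_j = (j + 0.5) * dx; indices wrap periodically."""
    rho = np.zeros(nx)
    for x, q in zip(positions, charges):
        s = x / dx - 0.5                    # position in cell-center units
        j = int(np.floor(s))                # nearest cell center to the left
        w = s - j                           # linear weight for the right cell
        rho[j % nx] += q * (1.0 - w) / dx
        rho[(j + 1) % nx] += q * w / dx
    return rho
```

Because the two weights sum to one, the scheme conserves total charge exactly: sum(rho) * dx equals the total deposited charge. The field-to-particle interpolation uses the same linear weights in reverse, which is what makes the grid-particle coupling self-consistent.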

We use an existing software framework, FLASH, developed at the University of Chicago (Fryxell et al., 2000), that implements a block-structured adaptive (or uniform) Cartesian grid and is parallelized using the Message-Passing Interface (MPI) library for communication. It is written in Fortran 90, well structured into modules, and open source. Output is handled by the HDF5 library, providing parallel I/O. Although the FLASH framework has mostly been used for fluid modeling, it has support for particles, which we have used to implement a hybrid solver on the latest version of the framework, FLASH3. The advantage of using an existing framework when implementing a solver is that all grid operations, parallelization and file handling are done by standard software calls that have been well-tested. Also, there is an existing infrastructure for parameter files and setup directories that simplifies code handling. The concept of a setup directory is that one can place modified versions of any routine in the directory, and this new version will be used during the build process. This is an easy way to handle different versions of code for different runs of the solver.

In particular, many of the basic operations needed for a PIC code are provided as standard operations in FLASH:

• Deposit charges onto the grid: call Grid_mapParticlesToMesh()

• Interpolate fields to particle positions: call Grid_mapMeshToParticles()

• Ghost cell update for all blocks: call Grid_fillGuardCells()

The advantage of a parallel solver is the ability to handle larger computational problems than on a single processor, both in terms of computational time and memory requirements. This is especially important for PIC solvers, which are computationally intensive compared to fluid solvers. Since we typically have 10-100 particles per cell, the computational work is dominated by operations on the particles: moving the particles and grid-particle operations.

Whether a code works well in parallel is usually investigated by looking at how it scales. For the case of strong scaling, a fixed-size problem is run on different numbers of processors. Ideally the execution time should decrease in proportion to the number of processors (linear scaling). This is, however, difficult to achieve in real-world applications, since sequential parts of the program quickly come to dominate the execution time. An alternative concept is that of weak scaling.

Here the problem size is increa

…(Full text truncated)…
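The two scaling measures discussed above can be made concrete with small helpers. The timings in the example are hypothetical, purely to illustrate the definitions:

```python
def strong_scaling_efficiency(t1, tp, p):
    """Strong scaling: a fixed total problem run on p processors.
    Ideal (linear) scaling gives tp = t1 / p, i.e. efficiency 1."""
    return t1 / (p * tp)

def weak_scaling_efficiency(t1, tp):
    """Weak scaling: the problem size grows with p so the work per
    processor is fixed. Ideal scaling keeps the run time constant."""
    return t1 / tp
```

For example, a fixed problem taking 100 s on one processor and 12.5 s on eight has strong-scaling efficiency 1.0, while a per-processor-constant workload whose run time grows from 10 s to 12.5 s has weak-scaling efficiency 0.8.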


