
20.1 Time Integration

Several different time integration schemes are available for both active and passive particles. The subroutine Particles_advance handles the movement of particles through space and time. Because FLASH4 supports including different types of both active and passive particles in a single simulation, the implementation of Particles_advance may call several helper routines of the form pt_advanceMETHOD (e.g., pt_advanceLeapfrog, pt_advanceEuler_active, pt_advanceCustom), each acting on an appropriate subset of the existing particles. The METHOD here is determined by the ADVMETHOD part of the PARTICLETYPE Config statement (or the ADV part of a -particlemethods setup option) for that type of particle. See the Particles_advance source code for the mapping from ADVMETHOD keyword to pt_advanceMETHOD subroutine call.
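For illustration, a Config statement of the following schematic form associates a particle type with an advancement method. Only the PARTICLETYPE and ADVMETHOD keywords are described above; the other keywords, the method names, and the TYPE key of the setup option are shown as placeholders and may differ in your FLASH version.

   # Schematic Config statement: ADVMETHOD selects the pt_advanceMETHOD helper
   # that Particles_advance calls for this particle type.  The INITMETHOD and
   # MAPMETHOD keywords and the method names are placeholders for illustration.
   PARTICLETYPE passive INITMETHOD lattice MAPMETHOD quadratic ADVMETHOD euler

   # Schematic setup-time override; the ADV key names the advancement method
   # (the TYPE key shown here is assumed for illustration).
   ./setup MySimulation -auto -particlemethods=TYPE=passive,ADV=euler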


20.1.1 Active Particles (Massive)

The active particles implementation includes different time integration schemes, long-range force laws (coupling all particles), and short-range force laws (coupling nearby particles). The attributes listed in Table 20.1 are provided by this subunit. A specific implementation of the active portion of Particles_advance is selected by a setup option such as -with-unit=Particles/ParticlesMain/active/massive/Leapfrog, or by specifying something like REQUIRES Particles/ParticlesMain/active/massive/Leapfrog in a simulation's Config file (or by listing the path in the Units file if not using the -auto configuration option). Further details are given in Sec:ParticlesUsing below.
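For example, either of the following selects the leapfrog implementation named above (MySimulation is a placeholder simulation name):

   ./setup MySimulation -auto -with-unit=Particles/ParticlesMain/active/massive/Leapfrog

or, equivalently, in the simulation's Config file:

   REQUIRES Particles/ParticlesMain/active/massive/Leapfrog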

Available time integration schemes for active particles include a forward Euler scheme and leapfrog-based schemes; the corresponding implementations live under Particles/ParticlesMain/active/massive.

The leapfrog-based integrators implemented under Particles/ParticlesMain/active/massive supply the additional particle attributes listed in Table 20.2.

Table 20.1: Particle attributes provided by active particles.
Attribute Type Description
MASS_PART_PROP REAL Particle mass
ACCX_PART_PROP REAL $ x$-component of particle acceleration
ACCY_PART_PROP REAL $ y$-component of particle acceleration
ACCZ_PART_PROP REAL $ z$-component of particle acceleration

Table 20.2: Particle attributes provided by leapfrog time integration.
Attribute Type Description
OACX_PART_PROP REAL $ x$-component of particle acceleration at previous timestep
OACY_PART_PROP REAL $ y$-component of particle acceleration at previous timestep
OACZ_PART_PROP REAL $ z$-component of particle acceleration at previous timestep
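The role of these stored accelerations can be seen in a schematic leapfrog update of velocity-Verlet form. The following standalone Fortran sketch is purely illustrative (it is not the FLASH implementation): the position is advanced with the current acceleration, and the velocity update averages the accelerations at the old and new positions, which is why the previous step's acceleration (the OACX/OACY/OACZ attributes) must be kept for each particle.

   program leapfrog_sketch
     implicit none
     integer, parameter :: dp = kind(1.0d0)
     real(dp) :: x, v, a_old, a_new, dt
     integer  :: n

     ! Simple harmonic oscillator, a(x) = -x, as a stand-in for the
     ! acceleration FLASH would compute for an active particle.
     x = 1.0_dp;  v = 0.0_dp;  dt = 0.01_dp
     a_old = accel(x)                        ! plays the role of OACX_PART_PROP

     do n = 1, 1000
        x     = x + v*dt + 0.5_dp*a_old*dt*dt   ! drift using the old acceleration
        a_new = accel(x)                        ! acceleration at the new position (ACCX role)
        v     = v + 0.5_dp*(a_old + a_new)*dt   ! kick with averaged accelerations
        a_old = a_new                           ! store for the next step (OACX role)
     end do
     print *, 'x, v after integration:', x, v

   contains
     pure function accel(x) result(a)
       real(dp), intent(in) :: x
       real(dp) :: a
       a = -x
     end function accel
   end program leapfrog_sketch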


20.1.2 Charged Particles - Hybrid PIC

Collisionless plasmas are often modeled using fluid magnetohydrodynamics (MHD) models. However, the MHD fluid approximation is questionable when the gyroradius of the ions is large compared to the spatial region that is studied. On the other hand, kinetic models that discretize the full velocity space, or full particle-in-cell (PIC) models that treat ions and electrons as particles, are computationally very expensive. For problems where the time and spatial scales of the ions are of interest, hybrid models provide a compromise. In such models, the ions are treated as discrete particles, while the electrons are treated as an (often massless) fluid. This means that the time and spatial scales of the electrons do not need to be resolved, and enables applications such as modeling the solar wind interaction with planets. For a detailed discussion of different plasma models, see Ledvina et al. (2008).

20.1.2.1 The hybrid equations

In the hybrid approximation, ions are treated as particles, and electrons as a massless fluid. In what follows we use SI units. We have $ N_I$ ions at positions $ \mathbf{r}_i(t)$ [m] with velocities $ \mathbf{v}_i(t)$ [m/s], mass $ m_i$ [kg] and charge $ q_i$ [C], $ i=1,\ldots,N_I$. By spatial averaging we can define the charge density $ \rho_I(\mathbf{r},t)$ [Cm$ ^{-3}$] of the ions, their average velocity $ \mathbf{u}_I(\mathbf{r},t)$ [m/s], and the corresponding current density $ \mathbf{J}_I(\mathbf{r},t)=\rho_I\mathbf{u}_I$ [Cm$ ^{-2}$s$ ^{-1}$]. Electrons are modelled as a fluid with charge density $ \rho_e(\mathbf{r},t)$, average velocity $ \mathbf{u}_e(\mathbf{r},t)$, and current density $ \mathbf{J}_e(\mathbf{r},t)=\rho_e\mathbf{u}_e$. The electron number density is $ n_e=-\rho_e/e$, where $ e$ is the elementary charge. If we assume that the electrons are an ideal gas, then $ p_e=n_ekT_e$, so the pressure is directly related to temperature ($ k$ is Boltzmann's constant).

The trajectories of the ions are computed from the Lorentz force,

$\displaystyle \dfrac{d\mathbf{r}_i}{dt} = \mathbf{v}_i, \quad
\dfrac{d\mathbf{v}_i}{dt} = \dfrac{q_i}{m_i} \left(
\mathbf{E}+\mathbf{v}_i\times\mathbf{B} \right), \quad
i=1,\ldots,N_I
$

where $ \mathbf{E}=\mathbf{E}(\mathbf{r},t)$ is the electric field, and $ \mathbf{B}=\mathbf{B}(\mathbf{r},t)$ is the magnetic field. The electric field is given by

$\displaystyle \mathbf{E} = \dfrac{1}{\rho_I} \left( -\mathbf{J}_I\times\mathbf{B}
+ \mathbf{J}\times\mathbf{B}
- \nabla p_e \right) + \eta \mathbf{J}
- \eta_h \nabla^2 \mathbf{J},
$

where $ \rho_I$ is the ion charge density, $ \mathbf{J}_I$ is the ion current, $ p_e$ is the electron pressure, $ \mu_0=4\pi\cdot10^{-7}$ is the magnetic constant, $ \mathbf{J}=\mu_0^{-1}\nabla\times\mathbf{B}$ is the current, and $ \eta_h$ is a hyperresistivity. Here we assume that $ p_e$ is adiabatic. Then the relative change in electron pressure is related to the relative change in electron density by

$\displaystyle \frac{p_e}{p_{e0}} = \left( \frac{n_e}{n_{e0}} \right)^{\gamma} ,
$

where the zero subscript denotes reference values (here the initial values at $ t=0$). Faraday's law is then used to advance the magnetic field in time,

$\displaystyle \frac{\displaystyle\partial\mathbf{B}}{\displaystyle\partial t} = -\nabla\times\mathbf{E}.
$
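As a concrete illustration of the ion equation of motion, the following standalone Fortran sketch advances a single ion in prescribed uniform fields using a simple first-order discretization of the Lorentz force. It is not the hybrid solver's actual particle mover (which uses the predictor-corrector leapfrog described below), and the field values and initial velocity are illustrative.

   program lorentz_push_sketch
     implicit none
     integer, parameter :: dp = kind(1.0d0)
     real(dp), parameter :: qi = 1.602176634e-19_dp   ! proton charge [C]
     real(dp), parameter :: mi = 1.67262192e-27_dp    ! proton mass [kg]
     real(dp) :: r(3), v(3), E(3), B(3), dt
     integer  :: n

     ! Uniform fields (illustrative values): E in [V/m], B in [T]
     E  = (/ 0.0_dp, 0.0_dp, 0.0_dp /)
     B  = (/ 0.0_dp, 0.0_dp, 5.0e-9_dp /)
     r  = 0.0_dp
     v  = (/ 4.0e5_dp, 0.0_dp, 0.0_dp /)              ! solar-wind-like speed [m/s]
     dt = 0.05_dp/(abs(qi)*norm2(B)/mi)               ! small fraction of a gyroperiod

     do n = 1, 1000
        ! dv/dt = (q_i/m_i) * (E + v x B); first-order update for clarity only
        v = v + dt*(qi/mi)*(E + cross(v, B))
        ! dr/dt = v
        r = r + dt*v
     end do
     print *, 'final position [m]:', r

   contains
     pure function cross(a, b) result(c)
       real(dp), intent(in) :: a(3), b(3)
       real(dp) :: c(3)
       c(1) = a(2)*b(3) - a(3)*b(2)
       c(2) = a(3)*b(1) - a(1)*b(3)
       c(3) = a(1)*b(2) - a(2)*b(1)
     end function cross
   end program lorentz_push_sketch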

20.1.2.2 A cell-centered finite difference hybrid PIC solver

We use a cell-centered representation of the magnetic field on a uniform grid. All spatial derivatives are discretized using standard second-order finite difference stencils. Time advancement is done by a predictor-corrector leapfrog method with subcycling of the field update, denoted cyclic leapfrog (CL) by Matthews (1994). An advantage of this discretization is that the divergence of the magnetic field is zero, down to round-off errors. The ion macro-particles (each representing a large number of real particles) are deposited on the grid by a cloud-in-cell method (linear weighting), and interpolation of the fields to the particle positions is done by the corresponding linear interpolation. Initial particle positions are drawn from a uniform distribution, and initial particle velocities from a Maxwellian distribution. Further details of the algorithm can be found in Holmström (2012, 2013) and references therein, where an extension of the solver that includes inflow and outflow boundary conditions was used to model the interaction between the solar wind and the Moon. In what follows we describe the FLASH implementation of the hybrid solver with periodic boundary conditions.
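The cloud-in-cell deposition can be illustrated in one dimension: a particle's charge is shared between the two nearest cell centers in proportion to its distance from each. The sketch below is generic and is not the FLASH grid-mapping code; interpolation of fields back to the particle position uses the same linear weights.

   program cic_deposit_sketch
     implicit none
     integer, parameter :: dp = kind(1.0d0)
     integer, parameter :: nx = 8
     real(dp) :: rho(nx), dx, xmin, xp, q, s, w
     integer  :: i

     dx = 1.0_dp;  xmin = 0.0_dp
     rho = 0.0_dp
     xp = 3.3_dp          ! particle position (interior particle assumed;
     q  = 1.0_dp          !  boundary handling omitted)

     ! Index of the cell center at or below the particle, and the fractional
     ! distance past it (cell centers are at xmin + (i-0.5)*dx).
     s = (xp - xmin)/dx + 0.5_dp
     i = int(s)
     w = s - real(i, dp)

     ! Linear (cloud-in-cell) weights to the two neighboring cell centers.
     rho(i)   = rho(i)   + q*(1.0_dp - w)/dx
     rho(i+1) = rho(i+1) + q*w/dx
     print *, 'deposited density:', rho
   end program cic_deposit_sketch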

20.1.2.3 Hybrid solver implementation

The two basic operations needed for a PIC code, depositing particle quantities onto the grid and interpolating grid fields to the particle positions, are provided as standard operations in FLASH. At present the solver is restricted to a Cartesian uniform grid, since running an electromagnetic particle code on an adaptive grid is not straightforward: grid refinement and coarsening must be accompanied by particle split and join operations, and jumps in the cell size can lead to reflected electromagnetic waves.

The equations are stated in SI units, so all parameters should be given in SI units, and all output will be in SI units. The initial configuration is a spatially uniform plasma consisting of two species. The first species, named by pt_picPname_1, consists of particles of mass pt_picPmass_1 and charge pt_picPcharge_1. The initial (at $ t=0$) uniform number density is pt_picPdensity_1, and the velocity distribution is Maxwellian with a drift velocity of (pt_picPvelx_1, pt_picPvely_1, pt_picPvelz_1) and a temperature of pt_picPtemp_1. Each model macro-particle represents many real particles. The number of macro-particles per grid cell at the start of the simulation is set by pt_picPpc_1; this parameter thus controls the total number of macro-particles of species 1 in the simulation.
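Drawing velocities from a drifting Maxwellian amounts to adding Gaussian thermal components to the drift velocity. The following standalone Fortran sketch uses a Box-Muller transform for this purpose; it is illustrative only and is not the solver's initialization code (the drift, temperature, and mass play the roles of pt_picPvelx_1 etc., pt_picPtemp_1, and pt_picPmass_1).

   program maxwellian_init_sketch
     implicit none
     integer, parameter :: dp = kind(1.0d0)
     real(dp), parameter :: kB = 1.380649e-23_dp     ! Boltzmann constant [J/K]
     real(dp), parameter :: pi = 3.141592653589793_dp
     real(dp) :: mass, temp, drift(3), vth, v(3), u1, u2
     integer  :: k

     mass  = 1.67262192e-27_dp                       ! e.g. a proton [kg]
     temp  = 1.5e5_dp                                ! temperature [K]
     drift = (/ 4.0e5_dp, 0.0_dp, 0.0_dp /)          ! drift velocity [m/s]
     vth   = sqrt(kB*temp/mass)                      ! 1D rms thermal speed [m/s]

     do k = 1, 3
        ! Box-Muller: two uniform deviates give one Gaussian deviate per component
        call random_number(u1)
        call random_number(u2)
        u1 = 1.0_dp - u1                             ! map [0,1) to (0,1] to avoid log(0)
        v(k) = drift(k) + vth*sqrt(-2.0_dp*log(u1))*cos(2.0_dp*pi*u2)
     end do
     print *, 'sampled particle velocity [m/s]:', v
   end program maxwellian_init_sketch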

All the above parameters are also available for a second species, e.g., pt_picPmass_2, which is initialized in the same way as the first species.

Table 20.3: Runtime parameters for the hybrid solver. Initial values are at $ t=0$. For each parameter for species 1, there is a corresponding parameter for species 2 (named with 2 instead of 1), e.g., pt_picPvelx_2.
Variable Type Default Description
pt_picPname_1 STRING "H+" Species 1 name
pt_picPmass_1 REAL 1.0 Species 1 mass, $ m_i$ [amu]
pt_picPcharge_1 REAL 1.0 Species 1 charge, $ q_i$ [e]
pt_picPdensity_1 REAL 1.0 Initial $ n_I$ species 1 [m$ ^{-3}$]
pt_picPtemp_1 REAL 1.5e5 Initial $ T_I$ species 1 [K]
pt_picPvelx_1 REAL 0.0 Initial $ \mathbf{u}_I$ species 1 [m/s]
pt_picPvely_1 REAL 0.0  
pt_picPvelz_1 REAL 0.0  
pt_picPpc_1 REAL 1.0 Number of macro-particles of species 1 per cell
sim_bx REAL 0.0 Initial $ \mathbf{B}$ (at $ t=0$) [T]
sim_by REAL 0.0  
sim_bz REAL 0.0  
sim_te REAL 0.0 Initial $ T_e$ [K]
pt_picResistivity REAL 0.0 Resistivity, $ \eta$ [$ \Omega$ m]
pt_picResistivityHyper REAL 0.0 Hyperresistivity, $ \eta_h$
sim_gam REAL -1.0 Adiabatic exponent for electrons
sim_nsub INTEGER 3 Number of CL B-field update subcycles (must be odd)
sim_rng_seed INTEGER 0 Seed for the random number generator (used if $ >0$)
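For orientation, a flash.par excerpt for such a run might look as follows. All values are illustrative rather than recommended settings; the fixed time step (see the discussion of time-step limits below) is enforced by setting dtinit = dtmin = dtmax.

   # Hypothetical flash.par excerpt for a hybrid run (values illustrative only)

   # Species 1: e.g. solar-wind protons
   pt_picPname_1    = "H+"
   pt_picPmass_1    = 1.0
   pt_picPcharge_1  = 1.0
   pt_picPdensity_1 = 5.0e6
   pt_picPtemp_1    = 1.5e5
   pt_picPvelx_1    = 4.0e5
   pt_picPpc_1      = 8.0

   # Initial magnetic field [T] and electron temperature [K]
   sim_bz = 5.0e-9
   sim_te = 1.5e5

   # Fixed time step [s]: the leapfrog solver requires dtinit = dtmin = dtmax
   dtinit = 0.05
   dtmin  = 0.05
   dtmax  = 0.05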


The grid is initialized with the uniform magnetic field (sim_bx, sim_by, sim_bz).

Turning now to grid quantities: the cell-averaged mass density is stored in pden, and the charge density in cden. The magnetic field is represented by (grbx, grby, grbz). A background field can be set during initialization in (gbx1, gby1, gbz1); we then solve for the deviation from this background field. Care must be taken that the background field is divergence free with respect to the discrete divergence operator; the easiest way to ensure this is to compute the background field as the curl of a potential. The electric field is stored in (grex, grey, grez), the current density in (grjx, grjy, grjz), and the ion current density in (gjix, gjiy, gjiz). A resistivity is stored in gres, so it is possible to have a non-uniform resistivity in the simulation domain, but the default is to set the resistivity to the constant value pt_picResistivity everywhere. For post-processing, separate fields are stored for the charge density and ion current density of species 1 and 2.

Table 20.4: The grid variables for the hybrid solver that are most important for a user. For each property for species 1, there is a corresponding variable for species 2 (named with 2 instead of 1), e.g., cde2.
Variable Type Description
cden PER_VOLUME Total charge density, $ \rho $ [C/m$ ^3$]
grbx GENERIC Magnetic field, $ \mathbf{B}$ [T]
grby GENERIC  
grbz GENERIC  
grex GENERIC Electric field, $ \mathbf{E}$ [V/m]
grey GENERIC  
grez GENERIC  
grjx PER_VOLUME Current density, $ \mathbf{J}$ [A/m$ ^2$]
grjy PER_VOLUME  
grjz PER_VOLUME  
gjix PER_VOLUME Ion current density, $ \mathbf{J}_I$ [A/m$ ^2$]
gjiy PER_VOLUME  
gjiz PER_VOLUME  
cde1 PER_VOLUME Species 1 charge density, $ \rho_I$ [C/m$ ^3$]
jix1 PER_VOLUME Species 1 ion current density, $ \mathbf{J}_I$ [A/m$ ^2$]
jiy1 PER_VOLUME  
jiz1 PER_VOLUME  


Regarding particle properties: each computational macro-particle is labeled by a positive integer, specie, and has a mass and charge. Like all FLASH particles, they each have a position $ \mathbf{r}_i$ = (posx, posy, posz) and a velocity $ \mathbf{v}_i$ = (velx, vely, velz). To be able to deposit currents onto the grid, each particle also stores the current density corresponding to the particle, $ \mathbf{J}_{Ii}$. For the movement of the particles by the Lorentz force, we also need the electric and magnetic fields at the particle positions, $ \mathbf{E}(\mathbf{r}_i)$ and $ \mathbf{B}(\mathbf{r}_i)$, respectively.

Table 20.5: Important particle properties for the hybrid solver. Note that this is for the computational macro-particles.
Variable Type Description
specie REAL Particle type (an integer number 1,2,3,...)
mass REAL Mass of the particle, $ m_i$ [kg]
charge REAL Charge of the particle, $ q_i$ [C]
jx REAL Particle ion current, $ \mathbf{J}_{Ii}=q_i\mathbf{v}_i$ [A m]
jy REAL  
jz REAL  
bx REAL Magnetic field at particle, $ \mathbf{B}(\mathbf{r}_i)$ [T]
by REAL  
bz REAL  
ex REAL Electric field at particle, $ \mathbf{E}(\mathbf{r}_i)$ [V/m]
ey REAL  
ez REAL  


Regarding the choice of time step: the time step must be constant, $ \Delta t=$ dtinit = dtmin = dtmax, since the leapfrog solver requires this. For the solution to be stable, the time step $ \Delta t$ must be small enough. We will try to quantify this; here we assume that the grid cells are cubes, $ \Delta x=\Delta y=\Delta z$, and that we have a single-species plasma.

First of all, since the time advancement is explicit, there is the standard CFL condition that no particle should travel more than one grid cell in one time step, i.e. $ \Delta t\,\max_i(\vert\mathbf{v}_i\vert) < \Delta x$. This time step is printed to standard output by FLASH (as dt_Part), and can thus be monitored.

Secondly, we also have an upper limit on the time step due to Whistler waves (Pritchett 2000),

$\displaystyle \Delta t < \frac{ \Omega_i^{-1} }{\pi}
\left( \frac{\Delta x}{\delta_i} \right)^2,
\qquad
\delta_i = \frac{1}{\vert q_i\vert} \sqrt{\frac{m_i}{\mu_0 n}},
$

where $ \delta_i$ is the ion inertial length, and $ \Omega_i=\vert q_i\vert B/m_i$ is the ion gyrofrequency.

Finally, we also want to resolve the ion gyro motion by having several time steps per gyration. This will only put a limit on the time step if, approximately, $ \Delta x > 5 \delta_i$, and we then have that $ \Delta t < \Omega_i^{-1}$.

All these are only estimates that do not take into account, e.g., the initial conditions or the subcycling of the magnetic field update. In practice one can simply reduce the time step until the solution is stable. For runs with different density and/or magnetic field strength, the time step then needs to be scaled by the change in $ n_{I}/B$; e.g., if $ n_{I}$ is doubled, $ \Delta t$ can be doubled. The factor $ \left(\Delta x\right)^2$ implies that reducing the cell size by a factor of two requires a reduction in the time step by a factor of four.
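These estimates can be combined into a quick pre-run check. The following standalone Fortran sketch (illustrative only; the proton values, density, field strength, and cell size are placeholders) evaluates the ion inertial length, the gyrofrequency, and the three time-step limits discussed above; the particle CFL limit additionally requires an estimate of the fastest particle speed.

   program hybrid_dt_estimate
     implicit none
     integer, parameter :: dp = kind(1.0d0)
     real(dp), parameter :: pi  = 3.141592653589793_dp
     real(dp), parameter :: mu0 = 4.0e-7_dp*pi              ! magnetic constant [H/m]
     real(dp), parameter :: qi  = 1.602176634e-19_dp        ! proton charge [C]
     real(dp), parameter :: mi  = 1.67262192e-27_dp         ! proton mass [kg]
     real(dp) :: n, B, dx, vmax, delta_i, omega_i
     real(dp) :: dt_cfl, dt_whistler, dt_gyro

     n    = 5.0e6_dp       ! ion number density [m^-3]    (illustrative)
     B    = 5.0e-9_dp      ! magnetic field strength [T]  (illustrative)
     dx   = 1.0e5_dp       ! cell size [m]                (illustrative)
     vmax = 8.0e5_dp       ! estimate of the fastest particle speed [m/s]

     delta_i = sqrt(mi/(mu0*n))/abs(qi)                     ! ion inertial length [m]
     omega_i = abs(qi)*B/mi                                 ! ion gyrofrequency [rad/s]

     dt_cfl      = dx/vmax                                  ! particle CFL limit
     dt_whistler = (1.0_dp/(pi*omega_i))*(dx/delta_i)**2    ! whistler-wave limit
     dt_gyro     = 1.0_dp/omega_i                           ! resolve the gyro motion

     print *, 'ion inertial length delta_i [m]:', delta_i
     print *, 'dt limit, particle CFL      [s]:', dt_cfl
     print *, 'dt limit, whistler waves    [s]:', dt_whistler
     print *, 'dt limit, gyro motion       [s]:', dt_gyro
   end program hybrid_dt_estimate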


20.1.3 Passive Particles

Passive particles may be moved using one of several different methods available in FLASH4. With the exception of Midpoint, they are all single-step schemes. The methods are either first-order or second-order accurate, and all are explicit, as described below. In all implementations, particle velocities are obtained by mapping grid-based velocities onto particle positions as described in Sec:Particles Mapping.

Numerically solving Equation (20.1) for passive particles means solving a set of simple ODE initial value problems, separately for each particle, where the velocities $ {\bf v}_i$ are given at certain discrete points in time by the state of the hydrodynamic velocity field at those times. The time step is thus externally given and cannot be arbitrarily chosen by a particle motion ODE solver. Statements about the order of a method in this context should be understood as referring to the same method if it were applied in a hypothetical simulation where evaluations of velocities $ {\bf v}_i$ could be performed at arbitrary times (and with unlimited accuracy). Note that FLASH4 does not attempt to provide a particle motion ODE solver of higher accuracy than second order, since it makes little sense to integrate particle motion to a higher order than the fluid dynamics that provide its inputs.
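As an illustration of the simplest such scheme, the sketch below advances a set of passive particles with a first-order (Euler) step. The velocity-mapping routine shown is a hypothetical placeholder standing in for the grid-to-particle mapping; it is not FLASH's interface, and the prescribed flow is arbitrary.

   program passive_euler_sketch
     implicit none
     integer, parameter :: dp = kind(1.0d0)
     integer, parameter :: npart = 4
     real(dp) :: pos(3, npart), vel(3, npart), dt
     integer  :: p, n

     ! Arbitrary initial particle positions
     call random_number(pos)
     dt = 0.01_dp

     do n = 1, 100
        do p = 1, npart
           ! In FLASH the velocity would come from mapping the grid velocity
           ! field to the particle position; here a placeholder flow is used.
           vel(:, p) = grid_velocity_at(pos(:, p))
           ! First-order (Euler) update: x^{n+1} = x^n + dt * v(x^n, t^n)
           pos(:, p) = pos(:, p) + dt*vel(:, p)
        end do
     end do
     print *, 'final position of particle 1:', pos(:, 1)

   contains
     pure function grid_velocity_at(x) result(v)
       ! Placeholder for the grid-to-particle velocity mapping:
       ! a rigid rotation about the z-axis.
       real(dp), intent(in) :: x(3)
       real(dp) :: v(3)
       v = (/ -x(2), x(1), 0.0_dp /)
     end function grid_velocity_at
   end program passive_euler_sketch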

In all cases, particles are advanced in time from $ t^n$ (or, in the case of Midpoint, from $ t^{n-1}$) to $ t^{n+1} = t^n + \Delta t^n$ using one of the difference equations described below. The implementations assume that at the time when Particles_advance is called, the fluid fields have already been updated to $ t^{n+1}$, as is the case with the Driver_evolveFlash implementations provided with FLASH4. A specific implementation of the passive portion of Particles_advance is selected by a setup option such as -with-unit=Particles/ParticlesMain/passive/Euler, or by specifying something like REQUIRES Particles/ParticlesMain/passive/Euler in a simulation's Config file (or by listing the path in the Units file if not using the -auto configuration option). Further details are given in Sec:ParticlesUsing below.

The time integration of passive particles is tested in the ParticlesAdvance unit test, which can be used to examine the convergence behavior; see Sec:ParticlesUnitTest.