[FLASH-USERS] FLASH4.2 released

Klaus Weide klaus at flash.uchicago.edu
Fri Feb 7 20:20:44 EST 2014


Dear Users,

 The Flash Center is pleased to announce the release of version 4.2 of
the FLASH code.

Best,
-- the Flash Center Code Group

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!  New in FLASH 4.2  (since the FLASH 4.0.1 patch)   !!!!! 
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Added core-collapse supernova (CCSN) physics contributed by Sean Couch:
* Complete nuclear EOS routines (physics/Eos/EosMain/Nuclear)
* Local neutrino heating/cooling (physics/sourceTerms/Heat/HeatMain/Neutrino)
* Multispecies neutrino leakage (physics/RadTrans/RadTransMain/NeutrinoLeakage)
* Multidimensional core-collapse supernova test problem
  (source/Simulation/SimulationMain/CCSN)
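As with other bundled problems, the test problem is configured with the
setup script, e.g. "./setup CCSN -auto" plus whatever additional options
(dimensionality, resolution, etc.) the problem requires; the exact options
are an assumption here, so consult the simulation directory for details.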

Added units contributed by Dean Townsley (previously unreleased or newly developed):
* physics/sourceTerms/Flame unit
* physics/sourceTerms/Turb unit

* Updates to the Barnes-Hut tree gravity solver contributed by Richard Wunsch,
  developed in collaboration with Frantisek Dinnbier (responsible for periodic
  and mixed boundary conditions) and Stefanie Walch.

Contributed by Christoph Federrath:
* Significant updates for Sink Particles
* New 'FromFile' implementation of the Stir unit, available alongside the
  older 'Generate' implementation

There are two newly optimized versions of the unsplit solvers, Hydro_Unsplit
and MHD_StaggeredMesh, in source/physics/Hydro/HydroMain/unsplit. These new
optimized unsplit solvers are the default implementations of the unsplit
hydrodynamics and MHD solvers in FLASH 4.2. The old versions of the unsplit
solvers are now found in source/physics/Hydro/HydroMain/unsplit_old.
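If you need to reproduce results from earlier releases, the old
implementations can still be requested explicitly at setup time, for example
with "./setup MySimulation -auto --with-unit=physics/Hydro/HydroMain/unsplit_old"
(here "MySimulation" stands in for an actual simulation name; --with-unit is
the usual setup option for requesting a specific unit implementation).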

New capabilities for 3T (three-temperature) magnetohydrodynamics, available
for the first time.

New magnetic resistivity implementation, Spitzer HighZ, for HEDP problems.
Extended support for resistivity in cylindrical geometry in the unsplit solver.  

Laser / EnergyDeposition: asynchronous communication (experimental), and some
reorganization. New feature: EnergyDeposition can now be run once every n
time steps instead of every step.
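As a sketch of the runtime-parameter side (illustrative only; the
step-frequency parameter name below is a placeholder, not necessarily the
actual name -- see the EnergyDeposition chapter of the User's Guide):

  # flash.par excerpt (sketch)
  useEnergyDeposition   = .true.   # enable the EnergyDeposition unit
  ed_laserStepFrequency = 5        # placeholder name: deposit every 5 steps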

Updates to HYPRE solver coupling: avoid a communicator leak with HYPRE 2.0.9b.
Some changes in the MGD code for increased correctness.

Threading for unsplit MHD is now available; both threading strategies
(coarse-grained over the block list and fine-grained within a block) are
supported by the new code.

New, improved multipole Poisson solver, implementing the algorithmic 
refinements described in Couch, S.M., Graziani, C., and Flocke, N., ApJ 778, 
181 (2013), http://dx.doi.org/10.1088/0004-637X/778/2/181, 
http://arxiv.org/abs/1307.3135.  Specifically:
* Potential calculated at cell faces and averaged to produce cell-centered
  values, rather than directly at cell centers, so as to eliminate a convergence
  failure due to cell self-gravity
* Expansion center located at the "square-density-weighted center of mass",
  to minimize angular power at high multipoles (see the sketch below)
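A minimal Python sketch of these two refinements (illustrative only, not
FLASH code: the toy 1-D grid, the Gaussian density, and the direct-sum
stand-in for the multipole evaluation are all invented here):

  import numpy as np

  # Toy 1-D grid: cell faces and cell centers
  nx = 64
  xf = np.linspace(0.0, 1.0, nx + 1)       # cell-face coordinates
  xc = 0.5 * (xf[:-1] + xf[1:])            # cell-center coordinates
  dx = xf[1] - xf[0]
  rho = np.exp(-((xc - 0.4) / 0.1)**2)     # smooth toy density

  # Refinement 2: put the expansion center at the square-density-weighted
  # center of mass instead of the ordinary center of mass.
  xcen = np.sum(rho**2 * xc) / np.sum(rho**2)

  # Refinement 1: evaluate the potential on cell FACES, then average the
  # two bounding face values to get a cell-centered potential, instead of
  # evaluating directly at cell centers (this is what removes the
  # convergence failure due to cell self-gravity).
  def potential(points):
      # Direct-sum toy potential of the cell masses (G = 1); stands in
      # for the multipole evaluation of the real solver.  Faces never
      # coincide with cell centers, so no division by zero occurs.
      m = rho * dx
      return np.array([-np.sum(m / np.abs(p - xc)) for p in points])

  phi_face = potential(xf)                         # on the nx+1 faces
  phi_cell = 0.5 * (phi_face[:-1] + phi_face[1:])  # averaged to centers

  print("expansion center:", xcen)
  print("sample cell-centered potential:", phi_cell[:3])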


Some smaller changes:
* The code includes stubs for several additional code units for which
  we do not include implementations in this release (TreeCol, NSE,
  SolidMechanics, IncompNS, ImBound). Implementations may be available
  separately from their authors, and/or may be included in a future
  FLASH release.
* Hydro codes avoid unnecessary Eos calls on guard cells; this uses the
  skipSrl flag of the Eos_guardCells routine.
* A new Grid_bcApplyToRegionMixedGds API allows users to implement boundary
  conditions that take several types of solution variables (CENTER, FACE)
  into account.



Caveats, Limitations
--------------------

Simulation results obtained with this version of FLASH should not be
expected to match results from previous versions exactly.
Differences can be due to algorithmic changes in solvers, in particular
the unsplit Hydro and MHD implementations. Results from simulations that
use the HYPRE library for Diffusion and/or RadTrans operators may be
slightly different because we invoke HYPRE with slightly different
parameters (based on suggestions from the HYPRE authors).
Another source of unexpected differences is a change in how we compute
block boundary coordinates when an AMR Grid starts with more than one
root block (i.e., any of NBlockX, NBlockY, NBlockZ > 1), which leads to
different rounding errors in some scenarios; the sketch below shows how
this can happen.
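A minimal Python illustration of such rounding differences (the two formulas
below are invented for illustration, not necessarily the ones FLASH uses,
but they are algebraically equivalent to each other):

  # Domain split into NBlockX = 3 root blocks (arbitrary numbers)
  xmin, xmax, nblockx = -4.0e9, 7.0e9, 3

  for i in range(nblockx + 1):
      a = xmin + i * (xmax - xmin) / nblockx   # one formulation
      t = i / nblockx
      b = (1.0 - t) * xmin + t * xmax          # an equivalent formulation
      # a and b agree to machine precision, but can differ in the last
      # bits -- enough to break bitwise agreement between runs.
      print(i, a, b, a - b)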

Several code units have provisions for being threaded (OpenMP directives),
but these have not been recently tested and may require some adaptation.



The release is available at:

http://flash.uchicago.edu/site/flashcode/



Many, but not all, parts of FLASH4 are backwards-compatible with
FLASH2. There are no architectural changes from FLASH3 to FLASH4, so the
extensive documentation that the Flash code group has written detailing
how to make the transition from FLASH2 to FLASH3 as smooth as possible
applies equally to FLASH4. Users should look to:

http://flash.uchicago.edu/site/flashcode/user_support/

The website also contains other documentation, including a User's Guide
and a developer's section, as well as online descriptions of the public
interface routines of the various code units.


FLASH should be portable to most UNIX-like operating systems with a
Python interpreter, a Fortran 90 compiler, a C compiler, and an MPI library.
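A typical configure-and-build sequence (assuming the standard workflow and
the bundled Sedov test problem) is to run "./setup Sedov -auto" at the top
level of the source tree, then "make" in the resulting object directory,
which produces the flash4 executable to be launched under your MPI job
starter (e.g. mpirun).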



Development of the FLASH Code was funded by the DOE-supported
ASC/Alliance Center for Astrophysical Thermonuclear Flashes,
and continues to be funded by DOE NNSA and NSF.  We
acknowledge support received from Lawrence Livermore National
Laboratory and the University of Chicago.

All publications resulting from the use of the FLASH Code must
acknowledge the Flash Center for Computational Science. Addition of the
following text to the paper acknowledgments will be sufficient.

         "The software used in this work was in part developed by the
         DOE-supported Flash Center for Computational Science
         at the University of Chicago."



