[FLASH-USERS] FLASH 3.3 RELEASE

Klaus Weide klaus at flash.uchicago.edu
Wed Oct 20 18:40:58 EDT 2010



The Flash Center is pleased to announce the release of the next
version of the FLASH code, version 3.3. FLASH 3.3 includes several new
features and resolves many bugs found in previous releases up to
3.2. FLASH 3.3 will be the last release of FLASH3. Substantial High
Energy Density Physics capabilities are being added to FLASH, and the
next version, FLASH4, will include those capabilities. 



LICENSE 

The FLASH Code, and any part of this code, may only be released and
distributed by the Flash Center; individual users of the code are not
free to re-distribute the FLASH Code, or any of its components,
outside the Center. We require that all users sign a hardcopy version
of this License Agreement, and send it to the Flash
Center. Distribution of the Flash Code can only occur once a signed
License Agreement is received by us. 


New in FLASH Release 3.3

    * New high-order scheme implementations (3rd-order PPM and 5th
order WENO) for the unsplit hydrodynamics and MHD solvers, with
support for using 6 guard cells in the WENO scheme.

    * New Eulerian advection scheme for species and mass scalars for
the two unsplit solvers. 

    * New implementation that interpolates the two adiabatic indices
(gamc and game) and the gravity components in the Riemann states for
the two unsplit solvers.

    * New support for a 2nd-order predictor scheme in time for
gravitational acceleration in the two unsplit solvers.

    * New flux implementations for the two unsplit solvers: Marquina
flux for both solvers; hybrid (HLLC + HLL) for the unsplit hydro
solver.

    * New high-order interpolation schemes for including transverse
fluxes in the predictor step for the unsplit solvers.

    * New Riemann state reconstruction implementation based on
limiting characteristic variables for the split PPM solver.

    * New implementation of hydrostatic boundary conditions
(contributed by Dean Townsley).

    * New thermal diffusion implementation for uniform grids, based on
a pencil grid; the scheme is implicit if the diffusivity is constant.

    * New runtime parameter to force a short timestep if necessary to
reach tmax exactly. 

    * Ability to write a custom subset of particles to particle output
files (checkpoint files are not affected). The user needs to provide a
custom io_ptCreateSubset.F90. This is supported only with the HDF5 I/O
library.

    * Paramesh4dev option to "Avoid Orrery". This improves the scaling
of PARAMESH regridding events. It is only available in the pm4dev
PARAMESH implementation and is switched on by default. It may be
switched off by setting the runtime parameter use_flash_surr_blks_fill
to false.
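The switch-off mentioned above can be sketched as a flash.par entry; this is a minimal sketch assuming the usual FLASH runtime-parameter file syntax:

```
# flash.par -- disable the "Avoid Orrery" optimization
# (only meaningful with the pm4dev PARAMESH implementation;
#  the optimization is on by default)
use_flash_surr_blks_fill = .false.
```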

    * New, more scalable default restriction method in Grid_restrictAllLevels. 


Changes from FLASH3.2

    * The "Diffuse" code unit has been reorganized and its API redefined.

    * Bug fixes and updates for coupling gravity with the unsplit
hydro solvers. 

    * Reorganized use of scratch variables in the two unsplit solvers,
reducing memory consumption.

    * Isolated boundary conditions can now be used in the hybrid
(Multigrid+Pfft) Poisson solver, and thus in the Pfft Poisson solver
with PARAMESH. Support has been added for additional boundary
condition types in the Pfft Poisson solver (periodic, homogeneous
Dirichlet, and homogeneous Neumann in various combinations).

    * Grid interpolation now works correctly for more than 4 guard
cells (the number should be even; 6 is tested and used).

    * Trap obvious errors in timesteps, so that FLASH doesn't attempt
to keep running if a timestep limiter drops to zero. 

    * Fixed PARAMESH MPI calls that involve overlapping send and
receive buffers. This fixes "memcpy argument memory ranges overlap"
errors with newer MPI library versions.

    * Increased testing of HDF5 collective optimizations. In the last
release, we found that output files from runs on Blue Gene/P contained
bad data. The metadata caching bug is now fixed in ROMIO; the fix is
included in mpich2-1.0.8 or higher and openmpi-1.4 or higher, and is
incorporated in the V1R4 series of Blue Gene drivers.

    * HDF5 attributes are now created and written collectively when
using a FLASH parallel I/O HDF5 implementation (See
http://www.hdfgroup.org/HDF5/doc/RM/CollectiveCalls.html). 

    * Removal of the setup shortcut collective_hdf5. Usage of
collective I/O optimizations with HDF5 is now fully controlled by the
useCollectiveHDF5 runtime parameter.
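As a minimal flash.par sketch, the parameter named above toggles the optimizations:

```
# flash.par -- enable or disable collective I/O optimizations
# in the HDF5 parallel I/O implementation
useCollectiveHDF5 = .true.
```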

    * Setup script keywords GRIDFACE, GRIDCENTERVAR, GRIDVAR have been
renamed to SCRATCHFACE, SCRATCHCENTERVAR, SCRATCHVAR respectively. 
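The rename above affects simulation Config files; a sketch of the change, where the variable name otmp is purely illustrative:

```
# Simulation Config file -- old keyword (pre-3.3):
#   GRIDVAR otmp
# becomes:
SCRATCHVAR otmp
```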

    * Changes in the API for Grid routines implemented in the
GridParticles subunit; argument lists have changed due to the
reorganization. Grid_moveParticlesGlobal has been removed; use
Grid_moveParticles with regrid=.TRUE. instead.
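The replacement call can be sketched as below; only the routine name and the regrid=.TRUE. argument come from this release note, while the particles array and count arguments are illustrative placeholders, not names fixed by the API (consult the online interface documentation for the actual argument list):

```fortran
! Before (removed in FLASH 3.3):
!   call Grid_moveParticlesGlobal(particles, maxParticles, localCount)
! After: pass regrid=.TRUE. to Grid_moveParticles
call Grid_moveParticles(particles, maxParticles, localCount, &
                        regrid=.TRUE.)
```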


Known Issues in FLASH 3.3 Release

    * Performance may be poor when using the default settings of the
parallel-netcdf I/O implementation. This affects only simulations that
include particles. It happens because the application does not define
the total size of the file before entering data mode. The problem can
be avoided by setting nc_var_align_size=128000. See
http://trac.mcs.anl.gov/projects/parallel-netcdf/wiki/VariableAlignment
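The workaround above is a single flash.par entry, using the value quoted in this note:

```
# flash.par -- work around poor pnetcdf performance with particles
nc_var_align_size = 128000
```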

    * FLASH will abort in the HDF5 parallel I/O implementation when
there are zero blocks on some processors. Workarounds include: 1) run
your FLASH application on fewer processors, 2) set up your FLASH
application with the HDF5 serial I/O implementation, 3) use the
experimental PM_argonne parallel I/O implementation, or 4) use the
pnetcdf I/O implementation.

    * Collective I/O optimizations are always disabled in NoFbs grid
applications that use the HDF5 parallel I/O implementation. This is to
prevent a deadlock during metadata writes. If you have a NoFbs grid
application that needs the performance of collective I/O optimizations
with the HDF5 library, you can set up your FLASH application with the
experimental PM_argonne parallel I/O implementation.

    * Although time limiting due to burning is implemented, it is
turned off in most simulations by keeping the value of the parameter
enucDtFactor very high. The implementation is therefore not well
tested and should be used with care.
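To actually exercise the burning time limiter, enucDtFactor must be lowered from its very large default; the value below is purely illustrative, not a recommended setting:

```
# flash.par -- make the burning timestep limiter effective
# (0.3 is an illustrative value only; see the caveat above)
enucDtFactor = 0.3
```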


The release is available at:

http://flash.uchicago.edu/website/download/

A stripped down version of FLASH3 that may be downloaded without a
license is also available at

http://flash.uchicago.edu/website/codesupport/

This version is essentially the FLASH framework without any
implementations.  The Flash Center is continuing to provide support for
"add-ons" to the code. Please see the first chapter of the User's
Guide for details. 

Additionally, the FLASH testing software FlashTest, which became
available with the alpha release, continues to be available for
download at:

http://flash.uchicago.edu/website/codesupport/

Many, but not all parts of FLASH3 are backwards-compatible with
FLASH2.  The Flash code group has written extensive documentation
detailing how to make the transition from FLASH2 to FLASH3 as smooth
as possible.  Users should consult:

http://flash.uchicago.edu/website/codesupport/

The website also contains other documentation including
a User's Guide and a developer's section.  A new feature in FLASH3
documentation is the online description of the public interface
routines to various code units.


Development of the FLASH Code was funded by the DOE NNSA-ASC OASCR 
Flash Center.  We also acknowledge support received from Lawrence Livermore
National Laboratory and the University of Chicago.

All publications resulting from the use of the FLASH Code must
acknowledge the Flash Center.  Addition of the following text to the
paper acknowledgments will be sufficient.

         "The software used in this work was in part developed by the
         DOE NNSA-ASC OASCR Flash Center at the University of Chicago."


Enjoy!

The Flash Center CS/Applications Group



