[FLASH-USERS] Problems with MHD simulation

Ernesto Zurbriggen ernesto at mail.oac.uncor.edu
Wed Feb 12 13:55:35 EST 2014


Hi all!
I have my own MHD simulation that seems to work well in FLASH4.

The new FLASH4.2 release brings good news because of the two newly
optimized unsplit solvers, Hydro_Unsplit and MHD_StaggeredMesh,
so let's get to work!
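(For context, my setup line looks roughly like the one below; "MySimulation"
and the dimensionality flag are just placeholders for my own problem, and as
far as I know +usm is the setup shortcut that selects the unsplit
staggered-mesh MHD solver:

    ./setup MySimulation -auto -2d +usm
    cd object && make

The resulting flash4 binary is the one I run below with mpirun.)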

Now I'm running my simulation in FLASH4.2 and I'm having some problems
that I never had before.
While in FLASH4 everything is fine all the time (regardless of UG, AMR,
MHD_StaggeredMesh, etc.), FLASH4.2 often (although not always) gives me
the error shown below, especially when using MPI and AMR.

I know it's a hard problem to diagnose.

Does anybody have an idea of what is happening or what I'm doing wrong?
Thanks all.


!................................
ernesto@ntb:~/flash4.2/object$ mpirun -np 4 ./flash4
  MaterialProperties initialized
  Cosmology initialized
  Source terms initialized
   iteration, no. not moved =            0           0
  refined: total leaf blocks =            4
  refined: total blocks =            5
  INFO: Grid_fillGuardCells is ignoring masking.
   iteration, no. not moved =            0           3
   iteration, no. not moved =            1           0
  refined: total leaf blocks =           10
  refined: total blocks =           13
   iteration, no. not moved =            0           4
   iteration, no. not moved =            1           0
  refined: total leaf blocks =           31
  refined: total blocks =           41
   iteration, no. not moved =            0          17
   iteration, no. not moved =            1           1
   iteration, no. not moved =            2           0
  refined: total leaf blocks =           76
  refined: total blocks =          101
[flash_convert_cc_hook] PE=      2, ivar= 14, why=1
  Trying to convert non-zero mass-specific variable to per-volume form, but dens is zero!
  DRIVER_ABORT: Trying to convert non-zero mass-specific variable to per-volume form, but dens is zero!

--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.

!................................
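(My reading of the abort: flash_convert_cc_hook is converting the
cell-centered (cc) mass-specific variables to per-volume form, essentially
per-volume value = mass-specific value * dens, and it gives up when it finds
a cell where the mass-specific variable, here ivar = 14 on PE 2, is non-zero
while dens is exactly zero. The abort comes right after the refinement step
that reaches 76 leaf blocks, which is why I mention AMR above.)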
