[FLASH-USERS] Isothermal EoS in FLASH4.2.1?

聖鈞 sebastian0117 at gmail.com
Tue Apr 1 01:16:15 EDT 2014


Dear all,

I am using FLASH4.2.1.
Does FLASH have an isothermal EoS in this version?
I am trying the Dust Collapse test problem
(Simulation/SimulationMain/DustCollapse).
I know I can set gamma = 1.01 or 1.001 as an approximation,
but when I do, I get the error below.
I paste part of my terminal output here and also attach dustcoll.log.
(Or is there another way to implement an isothermal EoS?)
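For reference, here is a minimal sketch of the runtime-parameter change I am describing (my flash.par may differ in other respects); rieman_tol is shown at the value that appears in the error output:

   # flash.par (sketch) for the DustCollapse setup
   gamma      = 1.01     # nearly-isothermal ideal-gas approximation
   rieman_tol = 1.0e-5   # PPM Riemann solver tolerance (value reported below)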

 Nonconvergence in subroutine rieman

 Zone index       =            7
 Zone center      =    404687500.00000000
 Iterations tried =           12
 Pressure error   =   1.43932041093256351E-005
 rieman_tol       =   1.00000000000000008E-005

 pL       =   1.13777270983543210E+025  pR       =  1.37211370962750130E+025
 uL       =    15606100617.345432       uR       =   16168035460.794884
 cL       =    44848361755597672.       cR       =   52508121896343928.
 gamma_eL =    1.0100000000000000       gamma_eR =   1.0100000000000000
 gamma_cL =    1.0100000000000000       gamma_cR =   1.0100000000000000

 Iteration history:

   n                    p*
   1    0.100000000000E+06
   2    0.115008918011E+26
   3    0.106812526421E+26
   4    0.477848229229E+24
   5    0.858215657750E+25
   6    0.724817795464E+25
   7    0.317435173360E+25
   8    0.490717231760E+25
   9    0.451833911565E+25
  10    0.442745226924E+25
  11    0.443296813890E+25
  12    0.443290433520E+25

 Terminating execution.

 Driver_abort called. See log file for details.
 Error message is Nonconvergence in subroutine rieman
 Calling MPI_Abort() for shutdown in   2 seconds!

--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 6 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 7 with PID 15049 on
node stargate.phys.nthu.edu.tw exiting improperly. There are two reasons
this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[stargate.phys.nthu.edu.tw:15041] 6 more processes have sent help message
help-mpi-api.txt / mpi-abort
[stargate.phys.nthu.edu.tw:15041] Set MCA parameter
"orte_base_help_aggregate" to 0 to see all help / error messages
-------------- next part --------------
A non-text attachment was scrubbed...
Name: dustcoll.log
Type: text/x-log
Size: 34655 bytes
Desc: not available
URL: <http://flash.rochester.edu/pipermail/flash-users/attachments/20140401/d18a1c44/attachment.bin>

