<div dir="ltr"><div><div><div><div>Dear all,<br><br>I am using FLASH4.2.1.<br></div>Does FLASH has isothermal EoS in this version?<br></div><div></div>I am trying with the test problem:Dust Collapse (Simulation/SimulationMain/DustCollapse).<br>
I know I can set gamma = 1.01 or 1.001 as an approximation.
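Concretely, the relevant change in my flash.par is the adiabatic index;
rieman_tol is the Riemann solver tolerance that shows up in the failure below:

  # flash.par fragment: gamma-law gas pushed toward isothermal
  gamma      = 1.01      # ratio of specific heats; 1.001 behaves the same
  rieman_tol = 1.0e-5    # Riemann solver tolerance (value reported in the log)

Note that the reported pressure error (1.439e-5) only just exceeds this
tolerance.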
But when I set gamma = 1.01 or 1.001, I get the error below. I paste part of
my terminal output here, and I also attach dustcoll.log.
(Or is there another way to implement an isothermal EoS?)
 Nonconvergence in subroutine rieman

 Zone index       = 7
 Zone center      = 404687500.00000000
 Iterations tried = 12
 Pressure error   = 1.43932041093256351E-005
 rieman_tol       = 1.00000000000000008E-005

 pL = 1.13777270983543210E+025   pR = 1.37211370962750130E+025
 uL = 15606100617.345432         uR = 16168035460.794884
 cL = 44848361755597672.         cR = 52508121896343928.
 gamma_eL = 1.0100000000000000   gamma_eR = 1.0100000000000000
 gamma_cL = 1.0100000000000000   gamma_cR = 1.0100000000000000

 Iteration history:

   n          p*
   1   0.100000000000E+06
   2   0.115008918011E+26
   3   0.106812526421E+26
   4   0.477848229229E+24
   5   0.858215657750E+25
   6   0.724817795464E+25
   7   0.317435173360E+25
   8   0.490717231760E+25
   9   0.451833911565E+25
  10   0.442745226924E+25
  11   0.443296813890E+25
  12   0.443290433520E+25

 Terminating execution.

 Driver_abort called. See log file for details.
 Error message is Nonconvergence in subroutine rieman
 Calling MPI_Abort() for shutdown in 2 seconds!

--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 6 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 7 with PID 15049 on
node stargate.phys.nthu.edu.tw exiting improperly. There are two reasons
this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[stargate.phys.nthu.edu.tw:15041] 6 more processes have sent help message help-mpi-api.txt / mpi-abort
[stargate.phys.nthu.edu.tw:15041] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages