[FLASH-USERS] Isothermal EoS in FLASH4.2.1?

Seyit Hocuk seyit at astro.rug.nl
Tue Apr 1 05:26:28 EDT 2014


Dear Sebastian,

As an alternative, in the past I have used the cooling routine to 
enforce an isothermal condition: you simply overwrite the gas 
temperature with your isothermal temperature. It is quite simple. Your 
error, though, may be caused by something else entirely. Try tuning 
the parameter eint_switch and make sure your CFL is not too high.

Best,
Seyit
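
For illustration, the temperature-reset idea can be sketched in Python (FLASH itself is Fortran; the function name, array layout, and the ideal-gas relation below are assumptions for this sketch, not FLASH's actual cooling-unit interface):

```python
import numpy as np

K_B = 1.380649e-16  # Boltzmann constant [erg/K]
M_H = 1.6726e-24    # hydrogen mass [g]

def enforce_isothermal(eint, t_iso=10.0, mu=2.3, gamma=1.01):
    """Reset the specific internal energy of every cell to the value
    corresponding to a fixed temperature t_iso, assuming an ideal gas:
        eint = k_B * t_iso / ((gamma - 1) * mu * m_H)   [erg/g]
    eint is modified in place and also returned."""
    eint[:] = K_B * t_iso / ((gamma - 1.0) * mu * M_H)
    return eint

# Example: call once per time step, after the hydro update,
# on a stand-in array for one block's internal-energy data.
eint = np.full(8, 1.0e12)
enforce_isothermal(eint, t_iso=10.0)
```

The effect is the same as an isothermal EoS: whatever heating or cooling the hydro step produces is discarded, and the gas is pinned back to t_iso before the next step.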


On 04/01/2014 08:06 AM, Christoph Federrath wrote:
> Dear all,
>
> The new physics/sourceTerms/Polytrope unit can be used to get an isothermal equation of state. See source/Simulation/SimulationMain/unitTest/SinkMomTest as a template for how to include it in your simulation setup and how to use it in the flash.par.
>
> Your error below, however, occurs in the Riemann solver (in this case PPM), and using the Polytrope unit will not necessarily solve your problem. If it still does not work with the Polytrope unit properly configured in your flash.par, try reducing the CFL, switching to a more stable hydro solver, and/or using the dual-energy formalism (by setting eint_switch).
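>
> For concreteness, a minimal flash.par fragment along these lines might look as follows (the parameter names and values here are assumptions from memory, to be checked against the Polytrope unit's Config file and the SinkMomTest flash.par; for an isothermal gas the polytropic constant is the squared sound speed, and the value shown is an arbitrary example):
>
> ```
> usePolytrope     = .true.
> polytropeKonst   = 4.0e8    # K = c_s^2 [cm^2/s^2] for an isothermal gas
> polytropeGamma1  = 1.0      # polytropic exponent: P = K * rho^gamma
> ```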
>
> Christoph
>
> ________________________________
> Dr. Christoph Federrath
> Monash Centre for Astrophysics,
> School of Mathematical Sciences,
> Monash University,
> Clayton, VIC 3800, Australia
> +61 3 9905 9760
> http://www.ita.uni-heidelberg.de/~chfeder/index.shtml?lang=en
>
>
> On 01.04.2014, at 16:16, 聖鈞 wrote:
>
>> Dear all,
>>
>> I am using FLASH4.2.1.
>> Does FLASH have an isothermal EoS in this version?
>> I am trying the test problem Dust Collapse (Simulation/SimulationMain/DustCollapse).
>> I know I can set gamma=1.01 or 1.001 as an approximation.
>> But when I set gamma=1.01 or 1.001, I get the error below.
>> Here I paste part of my terminal output, and I also attach dustcoll.log.
>> (Or is there another way to implement an isothermal EoS?)
>>
>>   Nonconvergence in subroutine rieman
>>    
>>   Zone index       =            7
>>   Zone center      =    404687500.00000000
>>   Iterations tried =           12
>>   Pressure error   =   1.43932041093256351E-005
>>   rieman_tol       =   1.00000000000000008E-005
>>    
>>   pL       =   1.13777270983543210E+025  pR       =  1.37211370962750130E+025
>>   uL       =    15606100617.345432       uR       =   16168035460.794884
>>   cL       =    44848361755597672.       cR       =   52508121896343928.
>>   gamma_eL =    1.0100000000000000       gamma_eR =   1.0100000000000000
>>   gamma_cL =    1.0100000000000000       gamma_cR =   1.0100000000000000
>>    
>>   Iteration history:
>>    
>>     n                    p*
>>     1    0.100000000000E+06
>>     2    0.115008918011E+26
>>     3    0.106812526421E+26
>>     4    0.477848229229E+24
>>     5    0.858215657750E+25
>>     6    0.724817795464E+25
>>     7    0.317435173360E+25
>>     8    0.490717231760E+25
>>     9    0.451833911565E+25
>>    10    0.442745226924E+25
>>    11    0.443296813890E+25
>>    12    0.443290433520E+25
>>    
>>   Terminating execution.
>>    
>>   Driver_abort called. See log file for details.
>>   Error message is Nonconvergence in subroutine rieman
>>   Calling MPI_Abort() for shutdown in   2 seconds!
>>   
>> --------------------------------------------------------------------------
>> MPI_ABORT was invoked on rank 6 in communicator MPI_COMM_WORLD
>> with errorcode 1.
>>
>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> You may or may not see output from other processes, depending on
>> exactly when Open MPI kills them.
>> --------------------------------------------------------------------------
>> --------------------------------------------------------------------------
>> mpirun has exited due to process rank 7 with PID 15049 on
>> node stargate.phys.nthu.edu.tw exiting improperly. There are two reasons this could occur:
>>
>> 1. this process did not call "init" before exiting, but others in
>> the job did. This can cause a job to hang indefinitely while it waits
>> for all processes to call "init". By rule, if one process calls "init",
>> then ALL processes must call "init" prior to termination.
>>
>> 2. this process called "init", but exited without calling "finalize".
>> By rule, all processes that call "init" MUST call "finalize" prior to
>> exiting or it will be considered an "abnormal termination"
>>
>> This may have caused other processes in the application to be
>> terminated by signals sent by mpirun (as reported here).
>> --------------------------------------------------------------------------
>> [stargate.phys.nthu.edu.tw:15041] 6 more processes have sent help message help-mpi-api.txt / mpi-abort
>> [stargate.phys.nthu.edu.tw:15041] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
>> <dustcoll.log>
