<div dir="ltr">Hi Claude,<div><br></div><div>It may help to see your log file (or at least the last few lines of it).</div><div>Typically when I have a run crash during refinement with no apparent cause, I increase available memory (increase nodes, reduce cores per node) and that fixes it.</div><div><br></div><div>Best,<br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div dir="ltr">--------<div>Ryan</div></div></div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Oct 8, 2019 at 8:51 AM Claude Cournoyer-Cloutier <<a href="mailto:cournoyc@mcmaster.ca">cournoyc@mcmaster.ca</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="overflow-wrap: break-word;">Dear FLASH users,<div><br></div><div>I am using FLASH, coupled with the astrophysics code AMUSE, to model star formation in clusters and cluster formation. I am able to run a test version of my simulation, but encounter some issues with refinement when I am trying to run a more involved version — with the same parameters, but a higher possible refinement.</div><div><br></div><div>During refinement, I get the following output (truncated) and error message:</div><div><br></div><div><div><font face="Courier"> refined: total blocks = 585</font></div><div><font face="Courier"> iteration, no. not moved = 0 323</font></div><div><font face="Courier"> iteration, no. not moved = 1 27</font></div><div><font face="Courier"> iteration, no. not moved = 2 0</font></div><div><font face="Courier"> refined: total leaf blocks = 2311</font></div><div><font face="Courier"> refined: total blocks = 2641</font></div><div><font face="Courier">--------------------------------------------------------------------------</font></div><div><font face="Courier">MPI_ABORT was invoked on rank 45 in communicator MPI COMMUNICATOR 4 SPLIT FROM 0 </font></div><div><font face="Courier">with errorcode 4.</font></div><div><font face="Courier"><br></font></div><div><font face="Courier">NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.</font></div><div><font face="Courier">You may or may not see output from other processes, depending on</font></div><div><font face="Courier">exactly when Open MPI kills them.</font></div></div><div><font face="Courier"><br></font></div><div>From a few discussions I’ve had, I think the issue might have something to do with the spreading of the blocks on different processors — but I cannot find anything useful online. Have any of you encountered that error, or a similar error in the past ?</div><div><br></div><div>Best regards,</div><div><br></div><div>Claude</div><div><br></div><div>—</div><div><br><div>
Claude Cournoyer-Cloutier
Master's Candidate, McMaster University
Department of Physics & Astronomy
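
For reference, the change Claude describes (the same setup, but allowing deeper refinement) is normally made through the lrefine_max runtime parameter in flash.par. The fragment below is a minimal sketch with illustrative values only, not the settings from this run:

  # Allowing more refinement levels multiplies the number of blocks,
  # and with it the memory needed on each MPI rank.
  lrefine_min = 1
  lrefine_max = 7   # e.g. raised from 5 in the lower-resolution test run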
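
Ryan's suggestion above (more nodes, fewer cores per node) amounts to giving each MPI rank a larger share of node memory while keeping the total rank count the same. A minimal job-script sketch, assuming a Slurm scheduler; the node counts, task counts, and executable name are assumptions, not taken from the thread:

  #!/bin/bash
  #SBATCH --nodes=8              # e.g. doubled from 4 nodes
  #SBATCH --ntasks-per-node=16   # e.g. halved from 32 ranks per node
  #SBATCH --time=24:00:00

  # flash4 reads flash.par from the run directory by default
  mpirun -np $SLURM_NTASKS ./flash4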