<div dir="ltr">Hi Claude,<div><br></div><div>Thanks for attaching the log file. I'm pretty sure you need >= 105 processors (whereas you used 103 in the log file you sent me), which may explain why it's failing.</div><div><br></div><div>The end of the logfile shows paramesh is trying to make 10,265 blocks. With maxblocks=100 you should theoretically have just enough cores but in practice I've found FLASH needs 2% more blocks available than the maximum value "requested." That's where the 105 comes from.</div><div><br></div><div>The magic 2% comes from lots of runs on Comet, Stampede2, and local clusters. But that's all FLASH4.2.2 so the magic number may be a bit different for FLASH4.5.</div><div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><br></div><div>Best,</div><div dir="ltr">--------<div>Ryan</div></div></div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Oct 8, 2019 at 2:03 PM Claude Cournoyer-Cloutier <<a href="mailto:cournoyc@mcmaster.ca">cournoyc@mcmaster.ca</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="overflow-wrap: break-word;">Hi Ryan,<div><br></div><div>Thank you for your quick answer. Here is a copy of the log file for that simulation. I might try to increase the available memory, but we are already getting fairly down the queue for our cluster with the current requests.</div><div><br></div><div>Best,</div><div><br></div><div>Claude</div><div><br></div><div></div></div><div style="overflow-wrap: break-word;"><div><br><div><br><blockquote type="cite"><div>On Oct 8, 2019, at 4:41 PM, Ryan Farber <<a href="mailto:rjfarber@umich.edu" target="_blank">rjfarber@umich.edu</a>> wrote:</div><br><div><div dir="ltr">Hi Claude,<div><br></div><div>It may help to see your log file (or at least the last few lines of it).</div><div>Typically when I have a run crash during refinement with no apparent cause, I increase available memory (increase nodes, reduce cores per node) and that fixes it.</div><div><br></div><div>Best,<br clear="all"><div><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div dir="ltr">--------<div>Ryan</div></div></div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Oct 8, 2019 at 8:51 AM Claude Cournoyer-Cloutier <<a href="mailto:cournoyc@mcmaster.ca" target="_blank">cournoyc@mcmaster.ca</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>Dear FLASH users,<div><br></div><div>I am using FLASH, coupled with the astrophysics code AMUSE, to model star formation in clusters and cluster formation. I am able to run a test version of my simulation, but encounter some issues with refinement when I am trying to run a more involved version — with the same parameters, but a higher possible refinement.</div><div><br></div><div>During refinement, I get the following output (truncated) and error message:</div><div><br></div><div><div><font face="Courier"> refined: total blocks = 585</font></div><div><font face="Courier"> iteration, no. not moved = 0 323</font></div><div><font face="Courier"> iteration, no. not moved = 1 27</font></div><div><font face="Courier"> iteration, no. 
Best,
--------
Ryan

On Tue, Oct 8, 2019 at 2:03 PM Claude Cournoyer-Cloutier <cournoyc@mcmaster.ca> wrote:
> Hi Ryan,
>
> Thank you for your quick answer. Here is a copy of the log file for that
> simulation. I might try to increase the available memory, but we are
> already fairly far down the queue on our cluster with our current requests.
>
> Best,
>
> Claude
>
>> On Oct 8, 2019, at 4:41 PM, Ryan Farber <rjfarber@umich.edu> wrote:
>>
>> Hi Claude,
>>
>> It may help to see your log file (or at least the last few lines of it).
>> Typically, when I have a run crash during refinement with no apparent
>> cause, increasing the available memory (more nodes, fewer cores per node)
>> fixes it.
>>
>> Best,
>> --------
>> Ryan
>>
>> On Tue, Oct 8, 2019 at 8:51 AM Claude Cournoyer-Cloutier <cournoyc@mcmaster.ca> wrote:
>>> Dear FLASH users,
>>>
>>> I am using FLASH, coupled with the astrophysics code AMUSE, to model star
>>> formation in clusters and cluster formation. I am able to run a test
>>> version of my simulation, but I run into refinement issues when I try a
>>> more involved version with the same parameters but a higher maximum
>>> refinement level.
>>>
>>> During refinement, I get the following output (truncated) and error message:
>>>
>>>  refined: total blocks =          585
>>>  iteration, no. not moved =   0   323
>>>  iteration, no. not moved =   1    27
>>>  iteration, no. not moved =   2     0
>>>  refined: total leaf blocks =     2311
>>>  refined: total blocks =         2641
>>> --------------------------------------------------------------------------
>>> MPI_ABORT was invoked on rank 45 in communicator MPI COMMUNICATOR 4 SPLIT FROM 0
>>> with errorcode 4.
>>>
>>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>>> You may or may not see output from other processes, depending on
>>> exactly when Open MPI kills them.
>>>
>>> From a few discussions I've had, I think the issue might have something to
>>> do with how the blocks are spread across the different processors, but I
>>> cannot find anything useful online. Have any of you encountered this error,
>>> or a similar one, in the past?
>>>
>>> Best regards,
>>>
>>> Claude
>>> --
>>> Claude Cournoyer-Cloutier
>>> Master’s Candidate, McMaster University
>>> Department of Physics & Astronomy
>
> Claude Cournoyer-Cloutier
> Master’s Candidate, McMaster University
> Department of Physics & Astronomy