[FLASH-USERS] MPI deadlock in amr_refine_derefine

Vishal Tiwari vtiwari at umassd.edu
Mon Feb 25 20:21:23 EST 2019


Hi Rayan,

Thank you for your email.

I tried increasing the number of nodes, but the issue remained. We have also tried reducing maxblocks per processor (currently 30), but that didn't help either. We also checked the memory usage per node by logging into the nodes where the FLASH code was running, and the memory is not being exceeded.

Please find attached the log file for the run. We are using KNL compute nodes for the simulation.

I am running this code using 256 processes over 16 nodes.
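
For reference, here is a rough back-of-the-envelope estimate of the per-node memory this layout implies (a sketch only; the block size, guard-cell count, and variable count below are assumed placeholder values, not the ones from our actual setup, and real usage also includes tree, flux, and solver arrays):

    # Rough estimate of per-node memory implied by maxblocks.
    # nxb/nyb/nzb, nguard, and nvar are assumed placeholder values,
    # not the values from our actual FLASH setup.
    maxblocks = 30                  # blocks allocated per MPI rank
    ranks_per_node = 256 // 16      # 16 ranks per node in our layout
    nxb = nyb = nzb = 16            # interior cells per block (assumed)
    nguard = 4                      # guard cells per side (assumed)
    nvar = 18                       # number of unk variables (assumed)

    cells = (nxb + 2*nguard) * (nyb + 2*nguard) * (nzb + 2*nguard)
    bytes_per_rank = maxblocks * nvar * cells * 8       # double precision
    gib_per_node = ranks_per_node * bytes_per_rank / 2**30
    print(f"~{gib_per_node:.1f} GiB per node for the unk array alone")

Even with generous assumptions this comes out well below the roughly 96 GB of DDR4 on a Stampede2 KNL node, which is consistent with what we saw when logging into the nodes.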

Thank you for your help.

Regards,
Vishal Tiwari
Graduate Student, Physics
UMass, Dartmouth

________________________________
From: Ryan Farber <rjfarber at umich.edu>
Sent: Monday, February 25, 2019 2:57 PM
To: Vishal Tiwari
Cc: flash-users at flash.uchicago.edu; Robert Fisher
Subject: Re: [FLASH-USERS] MPI deadlock in amr_refine_derefine

Hi Vishal,

I've had a similar issue when not allocating enough memory to a job on Stampede2. You can try requesting one additional node as a test.
Note that reducing maxblocks (as suggested in the thread you linked) has the effect of requiring less memory.

If you're still having trouble, could you attach your logfile and mention how many nodes you're using, how many processes (if not the 256 suggested by your DDT attachment), and which node type (SKX or KNL)?

Best,
--------
Ryan


On Sun, Feb 24, 2019 at 12:27 PM Vishal Tiwari <vtiwari at umassd.edu> wrote:
Hello,

I am facing issues with my simulations on Stampede2: they get stuck in the refinement part of the code. Refinement proceeds fine as long as the number of blocks requested is smaller than the number of tasks, but the code hangs once the number of blocks exceeds ntasks. The trace from DDT suggests an MPI deadlock (see the attached figure).
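
For anyone searching the archives later: the DDT view shows every rank parked inside an MPI call. The snippet below is a minimal standalone illustration of that failure mode, not FLASH code and purely hypothetical, assuming mpi4py is available (run with exactly two ranks, e.g. mpirun -np 2 python deadlock_demo.py):

    # Minimal illustration of an MPI deadlock: both ranks block in a
    # synchronous send, so neither ever reaches the matching receive.
    # NOT FLASH code; it just reproduces the "all ranks stuck in MPI" picture.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    peer = 1 - rank                      # assumes exactly 2 ranks

    buf = np.zeros(1_000_000)            # too large for eager buffering

    comm.Ssend(buf, dest=peer, tag=0)    # both ranks block here forever
    comm.Recv(buf, source=peer, tag=0)   # never reached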

This issue occurs only on Stampede2; refinement worked fine on Stampede1 and works fine on a local cluster on my campus.

Further, I found people facing the exact same issue in this thread [1], but it did not conclude with a solution.

I would be grateful for any pointers with regards to this issue.

Thank you!

[1] http://flash.uchicago.edu/pipermail/flash-users/2017-September/002402.html

Regards,
Vishal Tiwari
Graduate Student, Physics
University of Massachusetts, Dartmouth
-------------- next part --------------
A non-text attachment was scrubbed...
Name: super3d.log
Type: text/x-log
Size: 36227 bytes
Desc: super3d.log
URL: <http://flash.rochester.edu/pipermail/flash-users/attachments/20190226/4cb14890/attachment.bin>

