[FLASH-USERS] MPI deadlock in amr_refine_derefine

Vishal Tiwari vtiwari at umassd.edu
Sat Mar 2 12:15:00 EST 2019


Hi Klaus,

We tried to run our code with FLASH 4.5, but we were still getting a
deadlock in the refinement part of the code.

We later found that the default MPI library on stampede2 (Intel MPI
18.0.2) was causing the deadlock; when we switched to openmpi-3.1.2, we
were able to get past the deadlock, which is very strange.
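
In case it helps anyone who hits this later: behavior that changes with
the MPI implementation often points to a communication pattern that is
legal but "unsafe", i.e. one that only completes if the library happens
to buffer messages internally. Below is a minimal generic sketch of such
a pattern -- not code from FLASH itself; MSGLEN and the two-rank setup
are just for illustration. Both ranks issue a blocking send first, which
returns under an eager (buffered) protocol but blocks under a rendezvous
protocol, so the same code can run under one MPI library and hang under
another.

  ! Generic illustration (not FLASH code): an "unsafe" exchange that
  ! relies on MPI buffering. Run with exactly two ranks.
  program unsafe_exchange
    use mpi
    implicit none
    integer, parameter :: MSGLEN = 1000000
    integer :: ierr, rank, peer
    integer :: status(MPI_STATUS_SIZE)
    real :: sendbuf(MSGLEN), recvbuf(MSGLEN)

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    peer = 1 - rank
    sendbuf = real(rank)

    ! Both ranks send before they receive. If the library buffers the
    ! message (eager protocol), both sends return and the receives
    ! match. If it does not (rendezvous protocol), both sends block
    ! waiting for a receive that is never posted: deadlock.
    call MPI_SEND(sendbuf, MSGLEN, MPI_REAL, peer, 0, &
                  MPI_COMM_WORLD, ierr)
    call MPI_RECV(recvbuf, MSGLEN, MPI_REAL, peer, 0, &
                  MPI_COMM_WORLD, status, ierr)

    call MPI_FINALIZE(ierr)
  end program unsafe_exchange

The portable fix is to post nonblocking receives first or to use
MPI_SENDRECV; neither library is strictly wrong when a pattern like this
hangs.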

Regards,
Vishal
________________________________
From: Robert Fisher <rfisher1 at umassd.edu>
Sent: Monday, February 25, 2019 9:17 PM
To: Vishal Tiwari
Cc: Klaus Weide
Subject: Re: [FLASH-USERS] MPI deadlock in amr_refine_derefine

Hi Klaus :

  That's a great suggestion. Vishal and I were discussing migrating to 4.5 just recently, since there are a number of new features which are relevant to us -- including the improved treatment of cylindrical geometry and improvements to tree gravity. We tend to be slow to upgrade, since the science we are doing is an outgrowth of the work we started with Suoqing Ji around 2012 and still uses the same base version of FLASH to guarantee backwards compatibility.

  Best wishes,

  Bob

On Mon, Feb 25, 2019 at 8:25 PM Vishal Tiwari <vtiwari at umassd.edu> wrote:
Hi Klaus,

Thank you for your email.

I am using FLASH 4.0.1.

I will try the most recent version, FLASH 4.5, and see if I still get the issue.

Thank you!

Regards,
Vishal
Graduate Student, Physics
UMass, Dartmouth

________________________________
From: Klaus Weide <klaus at flash.uchicago.edu>
Sent: Monday, February 25, 2019 6:08 PM
To: Vishal Tiwari
Cc: flash-users at flash.uchicago.edu; Robert Fisher
Subject: Re: [FLASH-USERS] MPI deadlock in amr_refine_derefine

On Sun, 24 Feb 2019, Vishal Tiwari wrote:

> Hello,
>
> I am facing issues with my simulations on stampede2: they get stuck in the refinement part of the code. Refinement proceeds fine as long as the number of blocks requested is smaller than the number of tasks, but the code hangs once the number of blocks exceeds the number of tasks. Looking at a trace of the code using ddt suggests that there is an MPI deadlock (see the attached figure).
>
> This issue occurs only on stampede2; the same setup refines fine on stampede1 and works fine on a local cluster at my campus.
>
> Further, I found that people were facing the exact same issue in this thread [1] <http://flash.uchicago.edu/pipermail/flash-users/2017-September/002402.html>, but the thread ended without a solution.
>
> I would be grateful for any pointers with regards to this issue.

Vishal,

You did not say which version of FLASH you are using. It does not seem to
be the latest, since according to your stack trace, there should be a
WAITALL call on line 720 of mpi_amr_redist_blk.F90. This is the case in

Grid/GridMain/paramesh/paramesh4/Paramesh4dev/PM4_package/mpi_source/mpi_amr_redist_blk.F90

of the FLASH 4.4 release code, but not in the same file from the FLASH
4.5 release. So there have been code changes in a file that plays an
important role in your stack trace. You should check whether you get the
same problem with the most recent release, FLASH 4.5.
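
For context, here is a generic sketch of the kind of code involved -- it
is not the actual PARAMESH source, and the argument names and the
request bookkeeping are simplified. A routine like mpi_amr_redist_blk
posts nonblocking sends and receives and then completes all of them with
a single MPI_WAITALL; if the ranks disagree about which messages to
expect, some of them wait forever, which is exactly how a stack trace
ends up parked inside a WAITALL.

  ! Generic sketch (not the PARAMESH source): a nonblocking block
  ! exchange completed by MPI_WAITALL. nsend/nrecv and the peer lists
  ! stand in for whatever the redistribution logic computes.
  subroutine exchange_blocks(sendbuf, recvbuf, sendpeers, recvpeers, &
                             nsend, nrecv, blksize)
    use mpi
    implicit none
    integer, intent(in) :: nsend, nrecv, blksize
    integer, intent(in) :: sendpeers(nsend), recvpeers(nrecv)
    real, intent(in)    :: sendbuf(blksize, nsend)
    real, intent(out)   :: recvbuf(blksize, nrecv)
    integer :: reqs(nsend + nrecv)
    integer :: stats(MPI_STATUS_SIZE, nsend + nrecv)
    integer :: i, nreq, ierr

    nreq = 0
    do i = 1, nrecv          ! post all receives first
       nreq = nreq + 1
       call MPI_IRECV(recvbuf(1, i), blksize, MPI_REAL, recvpeers(i), &
                      1, MPI_COMM_WORLD, reqs(nreq), ierr)
    end do
    do i = 1, nsend
       nreq = nreq + 1
       call MPI_ISEND(sendbuf(1, i), blksize, MPI_REAL, sendpeers(i), &
                      1, MPI_COMM_WORLD, reqs(nreq), ierr)
    end do

    ! Every posted request must be completed here. If the send/recv
    ! counts computed on different ranks do not match up, some ranks
    ! block in this call indefinitely -- an MPI deadlock like the one
    ! in the attached stack trace.
    call MPI_WAITALL(nreq, reqs, stats, ierr)
  end subroutine exchange_blocks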

Klaus



--
Dr. Robert Fisher
Associate Professor / Graduate Program Director
University of Massachusetts/Dartmouth
Department of Physics
285 Old Westport Road
North Dartmouth, Massachusetts 02747
robert.fisher at umassd.edu
http://www.novastella.org

