[FLASH-USERS] too many refinement iterations

Slavin, Jonathan jslavin at cfa.harvard.edu
Tue Sep 24 12:42:52 EDT 2019


Hi Sean,

It looks like the total block count is approaching the maximum allocated via
maxblocks, but I don't think the memory used is approaching the available
memory. My machine has 32 GB of memory. If I'm understanding the
calculation in the user's guide correctly, the per-process storage for the
solution data is
14 (NUNK_VARS) * (16 (nxb) + 8 (2*nguard)) * (16 (nyb) + 8) * 4000 (maxblocks)
= about 3.2e7 cells, which at 8 bytes per variable is roughly 250 MB.
Is that right? The setup is 2D, so I assume that I don't have
to worry about guard cells, etc. for z.
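To make the arithmetic explicit (assuming 8-byte reals and nguard = 4 per side,
i.e. 8 guard cells per dimension, with the block and variable counts above):

```shell
# Per-process estimate of the unk (solution) array for the 2D setup above.
# Assumes 8-byte reals and nguard=4 per side (8 guard cells per dimension).
nunk_vars=14; nxb=16; nyb=16; nguard2=8; maxblocks=4000
bytes=$(( nunk_vars * (nxb + nguard2) * (nyb + nguard2) * maxblocks * 8 ))
echo "$bytes bytes = $(( bytes / 1024 / 1024 )) MiB"
# prints: 258048000 bytes = 246 MiB
```

That is well under 32 GB even multiplied by 10 MPI ranks, though FLASH also
allocates scratch and flux arrays sized by maxblocks, so the true footprint
per process is somewhat larger.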

When I tried using useFortran2003=True in setup, I got a compilation error:
/home/jslavin/.local/mpich2-1.5/bin/mpif90
-I/home/jslavin/.local/hypre2.10.1/include -c -O2 -real_size 64 -I
/usr/include -DH5_USE_16_API -DMAXBLOCKS=5000 -DNXB=16 -DNYB=16 -DNZB=1
-DN_DIM=2 ut_sysMemCData.F90
ut_sysMemCData.F90(14): error #7005: Error in reading the compiled module
file.   [ISO_C_BINDING]
  use iso_c_binding, ONLY : c_double, c_ptr
------^
ut_sysMemCData.F90(18): error #6683: A kind type parameter must be a
compile-time constant.   [C_DOUBLE]
     real(c_double) :: measurement
----------^
ut_sysMemCData.F90(19): error #6406: Conflicting attributes or multiple
declaration of name.   [C_PTR]
     type(c_ptr) :: description
----------^
ut_sysMemCData.F90(14): error #6580: Name in only-list does not exist.
[C_DOUBLE]
  use iso_c_binding, ONLY : c_double, c_ptr
----------------------------^
ut_sysMemCData.F90(14): error #6580: Name in only-list does not exist.
[C_PTR]
  use iso_c_binding, ONLY : c_double, c_ptr
--------------------------------------^
compilation aborted for ut_sysMemCData.F90 (code 1)
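For what it's worth, ifort's error #7005 on ISO_C_BINDING usually means the
build is picking up a module file produced by a different compiler or compiler
version (for example, an mpif90 wrapper that actually invokes gfortran), or
stale .mod files from an earlier build. A couple of quick checks (the paths
here are my local ones; adjust as needed):

```shell
# Which compiler does this MPICH mpif90 actually invoke? (MPICH supports -show)
~/.local/mpich2-1.5/bin/mpif90 -show
# Is the ifort on PATH the one whose module files are being found?
which ifort && ifort --version
# Stale .mod files from a previous build can also trigger error #7005:
make clean
```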

Maybe this is fixed in FLASH 4.6?

Jon


On Tue, Sep 24, 2019 at 9:33 AM Sean M. Couch <couch at pa.msu.edu> wrote:

> The log file will tell you how many leaf and total blocks there are after
> each refinement event. I’ve found that you can bring maxblocks down pretty
> close to max(totBlks)/ranks, but you still want a little bit of headroom
> for the AMR to do its thing.
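A quick way to pull the peak block count out of the log, sketched here with a
stand-in log file; the "tot blks" wording is an assumption, since the exact
phrasing of the refinement summary lines varies between FLASH versions, so
check your own log for the form it actually uses:

```shell
# Stand-in for the refinement summaries FLASH writes to its log file;
# the "tot blks" wording is assumed - check your own log for the exact form.
cat > demo.log <<'EOF'
[GRID amr_refine_derefine]: min blks 10 max blks 14 tot blks 118
[GRID amr_refine_derefine]: min blks 12 max blks 16 tot blks 134
EOF
# Peak total block count over the run; divide by the number of MPI ranks
# (plus some headroom) to pick a safe maxblocks:
grep -o 'tot blks [0-9]*' demo.log | awk '{print $3}' | sort -n | tail -n 1
# → 134
```

With 10 ranks, for example, a peak of 134 total blocks would suggest maxblocks
could come down to around 14 plus headroom.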
>
> Sean
>
> ----------------------------------------------------------------------
> Sean M. Couch, Ph.D.
> Assistant Professor
> Department of Physics and Astronomy
> Department of Computational Mathematics, Science, and Engineering
> Facility for Rare Isotope Beams
> Michigan State University
> 567 Wilson Rd, 3260 BPS
> East Lansing, MI 48824
> (517) 884-5035 --- couch at pa.msu.edu --- www.pa.msu.edu/~couch
> On Sep 24, 2019, 9:30 AM -0400, Slavin, Jonathan <jslavin at cfa.harvard.edu>,
> wrote:
>
> Hi Sean,
>
> Thanks, that's a good idea. I think that I may have been using more blocks
> than I needed. I'll try that out.
> Is there a way, after the fact, to determine how many blocks were necessary
> for a run?
>
> Thanks,
> Jon
>
> On Tue, Sep 24, 2019 at 9:15 AM Couch, Sean <scouch at msu.edu> wrote:
>
> Hi Jon,
>
> There are various temporary/scratch arrays that are allocated at runtime.
> It could be the machine is running out of memory. Some arrays are sized
> according to maxblocks (or a _multiple_ thereof). Have you tried reducing
> maxblocks compiled into the application? Also, if you can set up the code
> with `useFortran2003=True`, you will get some very useful memory usage
> statistics in the log file just before the main iteration loop info starts
> printing.
>
> Sean
>
> On Sep 23, 2019, 11:27 AM -0400, Slavin, Jonathan <jslavin at cfa.harvard.edu>,
> wrote:
>
> Hi Marissa,
>
> Thanks for sharing your experience. I'm currently letting it run, as I
> mentioned, having changed the refinement criteria to remove pressure as a
> refine_var. Before I try anything more I'd like to see how that turns out.
> Running under gdb could take some time since, even using MPI with 10 CPUs on
> my desktop, it takes a couple of hours before it gets into the mode
> where it keeps increasing the refinement. I could, however, compile with
> various checks enabled and at low optimization, which could produce useful
> information.
> As for the units used, here they are (note that the drag unit under
> Particles/ParticlesForces and the dust unit under Particles/ParticlesMain
> are units that I have written):
> FLASH Units used:
>    Driver/DriverMain/Split
>    Grid/GridBoundaryConditions
>    Grid/GridMain/paramesh/interpolation/Paramesh4/prolong
>    Grid/GridMain/paramesh/interpolation/prolong
>    Grid/GridMain/paramesh/paramesh4/Paramesh4dev/PM4_package/headers
>    Grid/GridMain/paramesh/paramesh4/Paramesh4dev/PM4_package/mpi_source
>    Grid/GridMain/paramesh/paramesh4/Paramesh4dev/PM4_package/source
>
>  Grid/GridMain/paramesh/paramesh4/Paramesh4dev/PM4_package/utilities/multigrid
>    Grid/GridParticles/GridParticlesMapFromMesh
>    Grid/GridParticles/GridParticlesMapToMesh/Paramesh/MoveSieve
>    Grid/GridParticles/GridParticlesMove/Sieve/BlockMatch
>    Grid/GridParticles/GridParticlesMove/paramesh
>    Grid/GridSolvers/HYPRE/paramesh
>    IO/IOMain/hdf5/serial/PM
>    IO/IOParticles/hdf5/serial
>    Particles/ParticlesForces/shortRange/drag
>    Particles/ParticlesInitialization
>    Particles/ParticlesMain/active/dust
>    Particles/ParticlesMapping/Quadratic
>    Particles/ParticlesMapping/meshWeighting/MapToMesh
>    PhysicalConstants/PhysicalConstantsMain
>    RuntimeParameters/RuntimeParametersMain
>    Simulation/SimulationMain
>    flashUtilities/contiguousConversion
>    flashUtilities/general
>    flashUtilities/interpolation/oneDim
>    flashUtilities/nameValueLL
>    flashUtilities/rng
>    flashUtilities/sorting/quicksort
>    flashUtilities/system/memoryUsage/legacy
>    monitors/Logfile/LogfileMain
>    monitors/Timers/TimersMain/MPINative
>    physics/Diffuse/DiffuseMain/Unsplit
>    physics/Eos/EosMain/Gamma
>    physics/Hydro/HydroMain/split/PPM/PPMKernel
>    physics/materialProperties/Conductivity/ConductivityMain/PowerLaw
>
> Also note that this run uses Flash 4.3.
>
> Regards,
> Jon
>
> On Mon, Sep 23, 2019 at 11:02 AM Marissa Adams <madams at pas.rochester.edu>
> wrote:
>
> Hi Jonathan,
>
> I encountered something quite similar earlier this past summer. When I
> ran with AMR on a supercomputer, it would refine all the way, then crash
> via a segfault. I then tried running the same executable under gdb, and it
> ran just fine. However, while running it wouldn't even register the
> refinement; it just pushed through the time step where it had refined and
> crashed, as if it were a pseudo-fixed grid. It did this even when I asked
> for only one or two levels of refinement. I am wondering if you can run yours
> under gdb and see if it does something similar? Then perhaps we know we are
> dealing with the same "heisenbug" I've encountered.
>
> After moving back to my local machine to debug further, I sorted through
> some uninitialized variables that may have been eating at the memory and
> causing that sort of crash. I compiled with -O0 and additional -W flags to
> make sure it crashed appropriately so I could sort through the weeds. It
> was a process... but perhaps list which units you're using and I can tell
> you if I have any crossover, and which variables I initialized that seemed
> to fix the problem.
>
> Best,
> Marissa
>
> On Mon, Sep 23, 2019 at 9:05 AM Slavin, Jonathan <jslavin at cfa.harvard.edu>
> wrote:
>
> Hi,
>
> I've run into an issue with a simulation I'm running where, after evolving
> just fine, it begins to take more and more iterations during grid
> refinement until it fails. It's strange, because I ran the same simulation
> on a different system with the same code and the same parameters, and it
> worked just fine. I did use different versions of the Intel Fortran
> compiler, and the hardware (CPUs) was a bit different, but in terms of
> software it is the same.
> I'm currently trying again with refinement on pressure removed:
> previously I refined on both density and pressure; now I'm trying refining
> on density alone.
> If anyone has any other suggestions, I'd like to hear them.
>
> Thanks,
> Jon
>
> --
> Jonathan D. Slavin
> Astrophysicist - High Energy Astrophysics Division
> Center for Astrophysics | Harvard & Smithsonian
> Office: (617) 496-7981 | Cell: (781) 363-0035
> 60 Garden Street | MS 83 | Cambridge, MA 02138
>
>
>
>
>
>
>
>
>
>
>

-- 
Jonathan D. Slavin
Astrophysicist - High Energy Astrophysics Division
Center for Astrophysics | Harvard & Smithsonian
Office: (617) 496-7981 | Cell: (781) 363-0035
60 Garden Street | MS 83 | Cambridge, MA 02138