[FLASH-USERS] FLASH memory usage: Static, Dynamic and Machine Specific

Rahul Kashyap rkashyap at umassd.edu
Wed Aug 10 21:32:38 EDT 2016


Hi Klaus,

Thanks for replying.

Yes, I forgot to mention that I'm using the new multipole implementation
with 60 poles.

I have attached a small text file with a short summary of three runs,
which illustrates my problem well. 1024 processors were used for all
runs, with lrefine_max and the number of base blocks held fixed. I get
three different errors for three different maxblocks values.

My understanding was that a reasonable choice of maxblocks avoids such
memory failures.
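
For a rough sense of scale (my own back-of-envelope, not measured from
the setup): with +cube32 each 3D block has 32^3 interior cells plus,
assuming the default of 4 guard cells per side, a 40^3 footprint;
assuming on the order of 20 solution variables in double precision,
that is 40^3 x 20 x 8 bytes, roughly 10 MB per block, so maxblocks=30
would reserve about 300 MB per process for the solution arrays alone.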

Best,
-rahul

On Wed, Aug 10, 2016 at 6:11 PM, Klaus Weide <klaus at flash.uchicago.edu>
wrote:

> On Fri, 5 Aug 2016, Rahul Kashyap wrote:
>
> > More specifically, I'm having a problem using the multipole Gravity
> > solver, which causes memory allocation failures during initialization
> > and also during evolution. I couldn't get far with my naive knowledge
> > of FLASH's usage of '-maxblocks'.
>
> Are you using the newer implementation of the multipole solver,
>
>   Grid/GridSolvers/Multipole_new
>
> rather than
>
>   Grid/GridSolvers/Multipole ?
>
> (One way to request the newer version is to use the setup shortcut
>  +newMpole .)
>
> The newer version should behave better in terms of memory usage.
>
> Klaus
>
-------------- next part --------------

./setup binaryWD -3d +cube32 -auto -maxblocks=30 +uhd +newMpole --with-unit=Particles
lrefinemax=7;nblockx=4
1024 proc

ERROR: 
 refined: total leaf blocks =          932
 refined: total blocks =         1056
  Finished with Grid_initDomain, no restart
 Ready to call Hydro_init
 Hydro initialized
 Damp initialized
 Gravity initialized
 Initial dt verified

 Driver_abort called. See log file for details.
 Error message is
 [gr_mpoleAllocateRadialArrays] ERROR: gr_mpoleScratch allocate failed
 Calling MPI_Abort() for shutdown in   2 seconds!

 DRIVER_ABORT:
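
Note on the failure above: this is an explicit Fortran allocate
returning a nonzero status inside gr_mpoleAllocateRadialArrays. A
minimal sketch of that pattern follows; the array name is taken from
the log, but the subroutine arguments, dimensions, and error handling
are my illustrative assumptions, not the actual FLASH source:

  subroutine sketch_allocateRadialArrays (nMoments, nRadialBins)
    implicit none
    integer, intent (in) :: nMoments, nRadialBins
    real,    allocatable :: gr_mpoleScratch (:,:)   ! name taken from the log
    integer              :: status

    ! The scratch array grows with the number of moments and radial bins,
    ! independently of maxblocks, so lowering maxblocks does not shrink it.
    allocate (gr_mpoleScratch (1:nMoments, 1:nRadialBins), stat = status)

    if (status /= 0) then
       ! FLASH would call Driver_abort here with the message seen above.
       write (*,*) '[gr_mpoleAllocateRadialArrays] ERROR: gr_mpoleScratch allocate failed'
       stop
    end if
  end subroutine sketch_allocateRadialArrays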

##########################################################################################################

./setup binaryWD -3d +cube32 -auto -maxblocks=10 +uhd +newMpole --with-unit=Particles
lrefinemax=7;nblockx=4

ERROR: 
 refined: total leaf blocks =          932
 refined: total blocks =         1056
  Finished with Grid_initDomain, no restart
 Ready to call Hydro_init
 Hydro initialized
 Damp initialized
 Gravity initialized
 Initial dt verified
[cli_656]: aborting job:
Fatal error in MPI_Allreduce:
Other MPI error, error stack:
MPI_Allreduce(937)........................: MPI_Allreduce(sbuf=0x2b679d403458, rbuf=0x2b67cff10010, count=53150764, MPI_DOUBLE_PRECISION, MPI_SUM, comm=0x84000000) failed
MPIR_Allreduce_impl(777)..................:
MPIR_Allreduce_index_tuned_intra_MV2(2486):
FUNCNAME(460).............................: Unable to allocate 425206112 bytes of memory for temporary buffer (probably out of memory)
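
Note on the failure above: the temporary buffer MPI could not get is
exactly the reduction payload, 53,150,764 double-precision values x 8
bytes = 425,206,112 bytes, about 405 MB per process. A count that
large presumably comes from the size of the moment arrays being
reduced, not from the number of blocks, which would explain why
lowering maxblocks did not help.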

#############################################################################################################

./setup binaryWD -3d +cube32 -auto -maxblocks=5 +uhd +newMpole --with-unit=Particles
lrefinemax=7;nblockx=4
1024 proc

ERROR: 
 iteration, no. not moved =            0           0
 refined: total leaf blocks =           64
 refined: total blocks =           64
 INFO: Grid_fillGuardCells is ignoring masking.
  iteration, no. not moved =            0          56
  iteration, no. not moved =            1          28
  iteration, no. not moved =            2           0
 refined: total leaf blocks =          106
 refined: total blocks =          112
 ERROR in process_fetch_list : guard block starting index           1
  not larger than lnblocks           1  processor no.           83
  maxblocks_alloc           50
[cli_83]: aborting job:
application called MPI_Abort(comm=0x84000000, 1) - process 83
 ERROR in process_fetch_list : guard block starting index           1
  not larger than lnblocks           1  processor no.           35
  maxblocks_alloc           50
[cli_35]: aborting job:
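
Note on the failure above: Paramesh sizes several of its internal
communication buffers by maxblocks_alloc rather than maxblocks; the
value of 50 reported here for maxblocks=5 is consistent with a fixed
multiple of maxblocks. If that is right, setting maxblocks very small
starves the guard-cell exchange in process_fetch_list even though the
solution arrays themselves would fit.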