[FLASH-USERS] FLASH memory usage: Static, Dynamic and Machine Specific

Norbert Flocke flocke at flash.uchicago.edu
Thu Aug 11 09:57:40 EDT 2016


Hi Rahul,

From your setup script I see that you are using:

    lrefinemax = 7
    nblockx = 4
    +cube32

This would mean that for your run you are asking for:

    (2^3)^(lrefinemax - 1) * nblockx = 8^6 * 4 = 1,048,576 = 1024^2 blocks

Divide this by 1024 procs and you get 1024 blocks per proc,
so you would have to set maxblocks to at least 1024. On the
other hand, +cube32 means 32^3 cells per block, so each proc
would have to hold at least 32^3 * 1024 = 33,554,432 cells in
memory, and that is if you were using only one variable per
cell (which you are probably not; you are using more). Your
memory requirements are excessive.

Best,
Norbert


On Wed, 10 Aug 2016, Rahul Kashyap wrote:

> Hi Klaus,
>
> Thanks for replying.
>
> Yes, I forgot to mention that I'm using the new multipole implementation
> with 60 poles.
>
> I have attached a small txt file with a short summary of three runs which
> describes my problem very well. 1024 procs have been used for all runs with
> fixed lrefinemax and base blocks. I get three different errors for three
> different maxblocks values.
>
> My understanding was that a reasonable choice of maxblocks avoids any such
> memory failures.
>
> Best,
> -rahul
>
> On Wed, Aug 10, 2016 at 6:11 PM, Klaus Weide <klaus at flash.uchicago.edu>
> wrote:
>
>> On Fri, 5 Aug 2016, Rahul Kashyap wrote:
>>
>>> More specifically, I'm having a problem using the multipole Gravity solver,
>>> which causes memory allocation failures during initialization and also
>>> during evolution. I couldn't get far with my naive knowledge of FLASH's
>>> usage of '-maxblocks'.
>>
>> Are you using the newer implementation of the multipole solver,
>>
>>   Grid/GridSolvers/Multipole_new
>>
>> rather than
>>
>>   Grid/GridSolvers/Multipole ?
>>
>> (One way to request the newer version is to use the setup shortcut
>>  +newMpole .)
>>
>> The newer version should behave better in terms of memory usage.
>>
>> Klaus
>>
>
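
P.S. For reference, a complete setup line requesting the newer solver
might look roughly like the one below; the simulation name
"MySimulation", the dimensionality flag and the maxblocks value are
placeholders of mine, not taken from your actual setup:

    ./setup MySimulation -3d -auto +cube32 +newMpole -maxblocks=1200

The -maxblocks value is only illustrative; per the estimate above it
has to be at least about 1024 for a fully refined grid on 1024 procs.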


