[FLASH-USERS] Paramesh4 crashes due to an error in the GridParticlesMove

Xiao-Dong Wang wang at irf.se
Mon May 2 10:33:42 EDT 2016


Hello,

I am trying to use Paramesh in a custom simulation that uses particles.

The code runs well with this setup:
	setup.py PlasmaObstacleIonosphere -auto -3d +ug +parallelIO -nxb=18 -nyb=16 -nzb=16
combined with 
	iProcs = 2      
	jProcs = 4
	kProcs = 4


To use Paramesh, the setup command is:
	setup.py PlasmaObstacleIonosphere -auto -3d +pm4dev +parallelIO -nxb=18 -nyb=16 -nzb=16 -maxblocks=512

The relevant runtime parameters in the flash.par file are:

	iProcs = 2      
	jProcs = 4
	kProcs = 4

	lrefine_max = 3         
	lrefine_min = 1         

	refine_var_1 = "cde2"

	nblockx = 2             
	nblocky = 4             
	nblockz = 4             

	nrefs = 4               

	refine_on_particle_count = .false.	# The ultimate goal is to refine the grid by the number of particles per cell (see the sketch after this listing), but refinement by field value also gives the error.
	max_particles_per_blk = 100000
	min_particles_per_blk = 10		# These three lines should not matter for now: with refine_on_particle_count = .false., refinement should depend only on "cde2".
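
For reference, the particle-count-driven refinement I am ultimately aiming for would, as far as I understand these parameters, look roughly like the sketch below (the threshold values are placeholders, not tuned numbers):

	refine_on_particle_count = .true.	# intended eventual setting
	max_particles_per_blk = 100		# placeholder: mark a block for refinement once it holds more particles than this
	min_particles_per_blk = 10		# placeholder: allow derefinement when a block drops below this count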

However, an error always appears after the first step:

==============================================================================
 [ 05-02-2016  16:07:21.180 ] [gr_initGeometry] checking BCs for idir: 1
 [ 05-02-2016  16:07:21.182 ] [gr_initGeometry] checking BCs for idir: 2
 [ 05-02-2016  16:07:21.183 ] [gr_initGeometry] checking BCs for idir: 3
 [ 05-02-2016  16:07:51.760 ] [GRID amr_refine_derefine]: initiating refinement
 [ 05-02-2016  16:07:51.761 ] [GRID amr_refine_derefine]: redist. phase.  tot blks requested: 32
 [GRID amr_refine_derefine] min blks 1    max blks 1    tot blks 32
 [GRID amr_refine_derefine] min leaf blks 1    max leaf blks 1    tot leaf blks 32
 [ 05-02-2016  16:07:52.524 ] [GRID amr_refine_derefine]: refinement complete
 [ 05-02-2016  16:07:52.664 ] [GRID gr_expandDomain]: iteration=1, create level=2
 INFO: Grid_fillGuardCells is ignoring masking.
 [ 05-02-2016  16:07:52.685 ] [mpi_amr_comm_setup]: buffer_dim_send=711877, buffer_dim_recv=711877
 [ 05-02-2016  16:07:53.736 ] [GRID gr_expandDomain]: iteration=2, create level=3
 [ 05-02-2016  16:07:54.408 ] [GRID gr_expandDomain]: iteration=3, create level=3
 [ 05-02-2016  16:07:55.239 ] [GRID gr_expandDomain]: iteration=4, create level=3
 [ 05-02-2016  16:07:55.472 ] [GRID gr_expandDomain]: iteration=5, create level=3
 [ 05-02-2016  16:07:55.491 ] memory: /proc vsize    (MB):     4974.98 (min)       4975.39 (max)       4975.15 (avg)
 [ 05-02-2016  16:07:55.492 ] memory: /proc rss      (MB):     3307.12 (min)       3307.66 (max)       3307.46 (avg)
 [ 05-02-2016  16:07:55.493 ] memory: /proc vsize    (MB):     4974.98 (min)       4975.39 (max)       4975.15 (avg)
 [ 05-02-2016  16:07:55.494 ] memory: /proc rss      (MB):     3307.36 (min)       3307.71 (max)       3307.50 (avg)
 [ 05-02-2016  16:07:56.261 ] memory: /proc vsize    (MB):     4974.98 (min)       4975.39 (max)       4975.15 (avg)
 [ 05-02-2016  16:07:56.263 ] memory: /proc rss      (MB):     3310.65 (min)       3312.32 (max)       3310.90 (avg)
 [ 05-02-2016  16:07:56.273 ] [Particles_getGlobalNum]: Number of particles now: 442368
 [ 05-02-2016  16:07:56.396 ] [IO_writePlotfile] open: type=plotfile name=flash_hdf5_plt_cnt_0000
 [ 05-02-2016  16:07:58.051 ] [io_writeData]: wrote      32          blocks
 [ 05-02-2016  16:07:58.096 ] [IO_writePlotfile] close: type=plotfile name=flash_hdf5_plt_cnt_0000
 [ 05-02-2016  16:07:58.160 ] [IO_writeParticles] open: type=particles name=flash_hdf5_part_0000
 [ 05-02-2016  16:07:58.161 ] [IO_writeParticles]: done called Particles_updateAttributes()
 [ 05-02-2016  16:08:00.940 ] [IO_writeParticles] close: type=particles name=flash_hdf5_part_0000
 [ 05-02-2016  16:08:00.955 ] memory: /proc vsize    (MB):     4977.71 (min)       4978.27 (max)       4977.82 (avg)
 [ 05-02-2016  16:08:00.956 ] memory: /proc rss      (MB):     3312.65 (min)       3314.48 (max)       3312.93 (avg)
 [ 05-02-2016  16:08:00.957 ] [Driver_evolveFlash]: Entering evolution loop
 [ 05-02-2016  16:08:00.957 ] step: n=1 t=0.000000E+00 dt=2.500000E-02
 [ 05-02-2016  16:08:02.667 ] [DRIVER_ABORT]: Driver_abort() called by PE           0
 [ 05-02-2016  16:08:02.668 ] abort_message: src == gr_meshMe, and we still have unmatched neData, bad

The final abort message comes from
<flash home>/source/Grid/GridParticles/GridParticlesMove/Sieve/BlockMatch/gr_ptNextProcPair.F90,
but beyond that no further information is available.
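
For what it is worth, judging from the abort message alone, the check that fires seems to be of roughly this shape (a sketch only; the abort routine name and the surrounding logic are my assumptions, not the actual FLASH source):

	! Sketch reconstructed from the abort message, not the real routine body.
	! gr_ptNextProcPair presumably picks the next source/destination processor
	! pair for the Sieve particle exchange; apparently, if the source comes
	! back around to the local rank while particle data is still unmatched,
	! the routine gives up:
	if (src == gr_meshMe) then
	   call Driver_abortFlash &
	        ("src == gr_meshMe, and we still have unmatched neData, bad")
	end if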

I have tried different values for lrefine_min/_max, nblock[xyz], and n[xyz]b, as well as +pm40 instead of +pm4dev at setup, but the problem persists.

Could anyone give a hint on the problem? Thank you very much!

Best regards,
Xiao-Dong Wang

Swedish Institute of Space Physics (IRF)
Box 812, SE-981 28 Kiruna, Sweden
E-mail: wang at irf.se
Phone: +46 980 79008
