[FLASH-USERS] Resend: Help regarding PARAMESH
Rambaks, Andris
andris.rambaks at rwth-aachen.de
Wed Feb 7 02:31:57 EST 2024
Sorry for resending this message in a new thread; something is up with my Thunderbird client.
Dear FLASH community,
Although I am not using FLASH directly, I was hoping to find some insight regarding the PARAMESH package that FLASH uses. As most of you know, PARAMESH has been unsupported for many years now, which is why I would appreciate any help from this community regarding known problems and bugs in PARAMESH, especially those relating to inter-process communication.
The problem I am having relates to guard cell filling between neighboring blocks on different processes. When running only one process (serial execution), everything works fine. However, when running multiple processes in parallel (e.g. mpirun -np 2 ...), the guard cells on processes with mype > 0 (as returned by MPI_COMM_RANK (MPI_COMM_WORLD, mype, ierr)) are either not filled at all or filled at some block boundaries and not at others. In these cases both the UNK and WORK guard cells take on the value 0.0. I checked the values of the NEIGH array to confirm that the block connectivity (which block neighbors which) is set up correctly, so the problem most likely lies in communicating the data between processes.
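To illustrate, here is a minimal sketch of the kind of check described above. It assumes the usual PARAMESH 4 module and interface names (paramesh_dimensions, physicaldata, tree, paramesh_interfaces, amr_guardcell); the exact argument lists may differ between versions, and the mesh setup itself is elided.

! Minimal sketch: fill guard cells, then print the low-x guard region
! of variable 1 on every local block, together with its face-1 neighbor.
! Module/interface names are assumed PARAMESH 4 conventions.
program check_guardcells
  use paramesh_dimensions, only : nguard, nyb
  use physicaldata,        only : unk
  use tree,                only : lnblocks, neigh
  use paramesh_interfaces, only : amr_guardcell
  implicit none
  include 'mpif.h'
  integer :: mype, nprocs, ierr, lb

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, mype, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  ! ... amr_initialize, build/refine the block tree, fill UNK ... (elided)

  ! Fill guard cells of the cell-centred variables (iopt = 1)
  ! using nguard layers.
  call amr_guardcell(mype, 1, nguard)

  ! On ranks mype > 0 the printed guard values come out as 0.0 in the
  ! failing cases, even though neigh reports the correct neighbor.
  do lb = 1, lnblocks
     write(*,*) 'pe', mype, 'blk', lb,                        &
                'neigh(face 1) =', neigh(1:2,1,lb),           &
                'guard(var 1)  =', unk(1,1:nguard,nguard+nyb/2,1,lb)
  end do

  call MPI_FINALIZE(ierr)
end program check_guardcells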
With the DEBUG flag set (#define DEBUG in the header file), I get the output below, ending with the "Paramesh error" message and the MPI_ABORT.
Please get in touch if you have had similar problems with the AMR package or if you are also working extensively with PARAMESH.
Kind regards
Andris
--------------------------------------------------------------------------------------------------------------------------------------
pe 0 entered amr_check_refine
amr_check_refine : proc 0 step 1 refine(1:lnblocks) T
amr_check_refine : proc 0 step 2 jface 1
amr_check_refine : proc 0 waiting jface 1 testt nrecv 0 nsend 0
amr_check_refine : proc 0 step 3 jface 1
amr_check_refine : proc 0 step 2 jface 2
amr_check_refine : proc 0 waiting jface 2 testt nrecv 0 nsend 0
amr_check_refine : proc 0 step 3 jface 2
amr_check_refine : proc 0 step 2 jface 3
amr_check_refine : proc 0 waiting jface 3 testt nrecv 0 nsend 0
amr_check_refine : proc 0 step 3 jface 3
amr_check_refine : proc 0 step 2 jface 4
amr_check_refine : proc 0 waiting jface 4 testt nrecv 0 nsend 0
amr_check_refine : proc 0 step 3 jface 4
amr_check_refine : proc 0 step 4
pe 0 exiting amr_check_refine
pe 1 entered amr_check_refine
amr_check_refine : proc 1 step 1 refine(1:lnblocks)
amr_check_refine : proc 1 step 2 jface 1
amr_check_refine : proc 1 waiting jface 1 testt nrecv 0 nsend 0
amr_check_refine : proc 1 step 3 jface 1
amr_check_refine : proc 1 step 2 jface 2
amr_check_refine : proc 1 waiting jface 2 testt nrecv 0 nsend 0
amr_check_refine : proc 1 step 3 jface 2
amr_check_refine : proc 1 step 2 jface 3
amr_check_refine : proc 1 waiting jface 3 testt nrecv 0 nsend 0
amr_check_refine : proc 1 step 3 jface 3
amr_check_refine : proc 1 step 2 jface 4
amr_check_refine : proc 1 waiting jface 4 testt nrecv 0 nsend 0
amr_check_refine : proc 1 step 3 jface 4
amr_check_refine : proc 1 step 4
pe 1 exiting amr_check_refine
iteration, no. not moved = 0 0
message sizes 1 cc/nc/fc/ec 0 0 0 0
message sizes 2 cc/nc/fc/ec 0 0 0 0
message sizes 3 cc/nc/fc/ec 0 0 0 0
message sizes 4 cc/nc/fc/ec 0 0 0 0
message sizes 5 cc/nc/fc/ec 0 0 0 0
message sizes 6 cc/nc/fc/ec 0 0 0 0
message sizes 7 cc/nc/fc/ec 0 0 0 0
message sizes 8 cc/nc/fc/ec 0 0 0 0
message sizes 9 cc/nc/fc/ec 0 0 0 0
message sizes 10 cc/nc/fc/ec 16 25 40 40
message sizes 11 cc/nc/fc/ec 400 505 904 904
message sizes 12 cc/nc/fc/ec 16 25 40 40
message sizes 13 cc/nc/fc/ec 400 505 904 904
message sizes 14 cc/nc/fc/ec 10000 10201 20200 20200
message sizes 15 cc/nc/fc/ec 400 505 904 904
message sizes 16 cc/nc/fc/ec 16 25 40 40
message sizes 17 cc/nc/fc/ec 400 505 904 904
message sizes 18 cc/nc/fc/ec 16 25 40 40
message sizes 19 cc/nc/fc/ec 0 0 0 0
message sizes 20 cc/nc/fc/ec 0 0 0 0
message sizes 21 cc/nc/fc/ec 0 0 0 0
message sizes 22 cc/nc/fc/ec 0 0 0 0
message sizes 23 cc/nc/fc/ec 0 0 0 0
message sizes 24 cc/nc/fc/ec 0 0 0 0
message sizes 25 cc/nc/fc/ec 0 0 0 0
message sizes 26 cc/nc/fc/ec 0 0 0 0
message sizes 1 cc/nc/fc/ec 0 0 0 0
message sizes 2 cc/nc/fc/ec 0 0 0 0
message sizes 3 cc/nc/fc/ec 0 0 0 0
message sizes 4 cc/nc/fc/ec 0 0 0 0
message sizes 5 cc/nc/fc/ec 0 0 0 0
message sizes 6 cc/nc/fc/ec 0 0 0 0
message sizes 7 cc/nc/fc/ec 0 0 0 0
message sizes 8 cc/nc/fc/ec 0 0 0 0
message sizes 9 cc/nc/fc/ec 0 0 0 0
message sizes 10 cc/nc/fc/ec 16 25 40 40
message sizes 11 cc/nc/fc/ec 400 505 904 904
message sizes 12 cc/nc/fc/ec 16 25 40 40
message sizes 13 cc/nc/fc/ec 400 505 904 904
message sizes 14 cc/nc/fc/ec 10000 10201 20200 20200
message sizes 15 cc/nc/fc/ec 400 505 904 904
message sizes 16 cc/nc/fc/ec 16 25 40 40
message sizes 17 cc/nc/fc/ec 400 505 904 904
message sizes 18 cc/nc/fc/ec 16 25 40 40
message sizes 19 cc/nc/fc/ec 0 0 0 0
message sizes 20 cc/nc/fc/ec 0 0 0 0
message sizes 21 cc/nc/fc/ec 0 0 0 0
message sizes 22 cc/nc/fc/ec 0 0 0 0
message sizes 23 cc/nc/fc/ec 0 0 0 0
message sizes 24 cc/nc/fc/ec 0 0 0 0
message sizes 25 cc/nc/fc/ec 0 0 0 0
message sizes 26 cc/nc/fc/ec 0 0 0 0
message sizes 27 cc/nc/fc/ec 0 0 0 0
message sizes 28 cc/nc/fc/ec 0 0 0 0
message sizes 29 cc/nc/fc/ec 0 0 0 0
message sizes 30 cc/nc/fc/ec 0 0 0 0
message sizes 31 cc/nc/fc/ec 0 0 0 0
message sizes 32 cc/nc/fc/ec 0 0 0 0
message sizes 33 cc/nc/fc/ec 0 0 0 0
message sizes 34 cc/nc/fc/ec 0 0 0 0
message sizes 27 cc/nc/fc/ec 0 0 0 0
message sizes 28 cc/nc/fc/ec 0 0 0 0
message sizes 29 cc/nc/fc/ec 0 0 0 0
message sizes 30 cc/nc/fc/ec 0 0 0 0
message sizes 31 cc/nc/fc/ec 0 0 0 0
message sizes 32 cc/nc/fc/ec 0 0 0 0
message sizes 33 cc/nc/fc/ec 0 0 0 0
message sizes 34 cc/nc/fc/ec 0 0 0 0
message sizes 35 cc/nc/fc/ec 0 0 0 0
message sizes 36 cc/nc/fc/ec 0 0 0 0
message sizes 37 cc/nc/fc/ec 64 81 144 144
message sizes 38 cc/nc/fc/ec 800 909 1708 1708
message sizes 39 cc/nc/fc/ec 64 81 144 144
message sizes 35 cc/nc/fc/ec 0 0 0 0
message sizes 36 cc/nc/fc/ec 0 0 0 0
message sizes 37 cc/nc/fc/ec 64 81 144 144
message sizes 38 cc/nc/fc/ec 800 909 1708 1708
message sizes 39 cc/nc/fc/ec 64 81 144 144
message sizes 40 cc/nc/fc/ec 800 909 1708 1708
message sizes 41 cc/nc/fc/ec 10000 10201 20200 20200
message sizes 42 cc/nc/fc/ec 800 909 1708 1708
message sizes 43 cc/nc/fc/ec 64 81 144 144
message sizes 44 cc/nc/fc/ec 800 909 1708 1708
message sizes 45 cc/nc/fc/ec 64 81 144 144
message sizes 46 cc/nc/fc/ec 0 0 0 0
message sizes 47 cc/nc/fc/ec 0 0 0 0
message sizes 48 cc/nc/fc/ec 0 0 0 0
message sizes 49 cc/nc/fc/ec 0 0 0 0
message sizes 50 cc/nc/fc/ec 0 0 0 0
message sizes 51 cc/nc/fc/ec 0 0 0 0
message sizes 52 cc/nc/fc/ec 0 0 0 0
message sizes 53 cc/nc/fc/ec 0 0 0 0
message sizes 54 cc/nc/fc/ec 0 0 0 0
pe 0 nprocs 2 start packing
pe 0 irpe 1 commatrix_send 0
pe 0 irpe 2 commatrix_send 3
pe 0 :pack for rempe 2 in buffer layer 1 blk 1 from local lb 1 dtype 14 index 1 buf_dim 5955
pe 0 :pack for rempe 2 in buffer layer 1 blk 2 from local lb 2 dtype 14 index 40079 buf_dim 5955
pe 0 :pack for rempe 2 in buffer layer 1 blk 3 from local lb 3 dtype 14 index 80157 buf_dim 5955
pe 0 iblk 1 unpacking starting at index 1 buf_dim 0
put_buffer : pe 0 index on entry 4
put_buffer : pe 0 index update for cc 40004 invar 4 ia ib ja jb ka kb 5 104 5 104 1 1 dtype 14
put_buffer : pe 0 tree info unpacked into block 1
pe 0 iblk 1 unpacked into 1
message sizes 40 cc/nc/fc/ec 800 909 1708 1708
message sizes 41 cc/nc/fc/ec 10000 10201 20200 20200
message sizes 42 cc/nc/fc/ec 800 909 1708 1708
message sizes 43 cc/nc/fc/ec 64 81 144 144
message sizes 44 cc/nc/fc/ec 800 909 1708 1708
message sizes 45 cc/nc/fc/ec 64 81 144 144
message sizes 46 cc/nc/fc/ec 0 0 0 0
message sizes 47 cc/nc/fc/ec 0 0 0 0
message sizes 48 cc/nc/fc/ec 0 0 0 0
message sizes 49 cc/nc/fc/ec 0 0 0 0
message sizes 50 cc/nc/fc/ec 0 0 0 0
message sizes 51 cc/nc/fc/ec 0 0 0 0
message sizes 52 cc/nc/fc/ec 0 0 0 0
message sizes 53 cc/nc/fc/ec 0 0 0 0
message sizes 54 cc/nc/fc/ec 0 0 0 0
pe 1 nprocs 2 start packing
pe 1 irpe 1 commatrix_send 2
pe 1 :pack for rempe 1 in buffer layer 1 blk 1 from local lb 1 dtype 14 index 1 buf_dim -1102110160
pe 1 :pack for rempe 1 in buffer layer 1 blk 2 from local lb 2 dtype 14 index 40079 buf_dim -1102110160
pe 1 irpe 2 commatrix_send 0
pe 1 iblk 1 unpacking starting at index 1 buf_dim 0
put_buffer : pe 1 index on entry 4
put_buffer : pe 1 index update for cc 40004 invar 4 ia ib ja jb ka kb 5 104 5 104 1 1 dtype 14
put_buffer : pe 1 tree info unpacked into block 1
pe 1 iblk 1 unpacked into 1
pe 1 iblk 2 unpacking starting at index 40079 buf_dim 0
put_buffer : pe 1 index on entry 40082
put_buffer : pe 1 index update for cc 80082 invar 4 ia ib ja jb ka kb 5 104 5 104 1 1 dtype 14
put_buffer : pe 1 tree info unpacked into block 2
pe 1 iblk 2 unpacked into 2
pe 0 iblk 2 unpacking starting at index 40079 buf_dim 0
put_buffer : pe 0 index on entry 40082
put_buffer : pe 0 index update for cc 80082 invar 4 ia ib ja jb ka kb 5 104 5 104 1 1 dtype 14
put_buffer : pe 0 tree info unpacked into block 2
pe 0 iblk 2 unpacked into 2
pack_blocks : pe 0 lcc lfc lec lnc T F F F lguard_in_progress F iopt 1 ngcell_on_cc 4
pack_blocks : pe 0 loc_message_size(14) 40078
pack_blocks : pe 0 loc_message_size(17) 1678
pe 0 sizing send buf to pe 2 adding message type 14 size 40078 accumulated size 40078 invar 4 message_size_cc 10000 ibndvar 0 message_size_fc 20200 ivaredge 0 message_size_ec 20200 ivarcorn 0 message_size_nc 10201 offset 75
pe 0 sizing send buf to pe 2 adding message type 14 size 40078 accumulated size 80156 invar 4 message_size_cc 10000 ibndvar 0 message_size_fc 20200 ivaredge 0 message_size_ec 20200 ivarcorn 0 message_size_nc 10201 offset 75
pe 0 sizing send buf to pe 2 adding message type 14 size 40078 accumulated size 120234 invar 4 message_size_cc 10000 ibndvar 0 message_size_fc 20200 ivaredge 0 message_size_ec 20200 ivarcorn 0 message_size_nc 10201 offset 75
pe 0 tot_no_blocks_to_be_received 2
pe 0 sizing recv buf from pe 2 adding message type 14 size 40078 accumulated size 40078 iseg 1 mess_segment_loc 1 lindex 40078
pe 0 sizing recv buf from pe 2 adding message type 14 size 40078 accumulated size 80156 iseg 2 mess_segment_loc 40079 lindex 80156
pe 0 nprocs 2 start packing
pe 0 irpe 1 commatrix_send 0
pe 0 irpe 2 commatrix_send 3
pe 0 :pack for rempe 2 in buffer layer 1 blk 1 from local lb 1 dtype 14 index 1 buf_dim 120235
pe 1 iblk 3 unpacking starting at index 80157 buf_dim 0
put_buffer : pe 1 index on entry 80160
put_buffer : pe 1 index update for cc 120160 invar 4 ia ib ja jb ka kb 5 104 5 104 1 1 dtype 14
put_buffer : pe 1 tree info unpacked into block 3
pe 1 iblk 3 unpacked into 3
pack_blocks : pe 1 lcc lfc lec lnc T F F F lguard_in_progress F iopt 1 ngcell_on_cc 4
pack_blocks : pe 1 loc_message_size(14) 40078
pack_blocks : pe 1 loc_message_size(17) 1678
pe 1 sizing send buf to pe 1 adding message type 14 size 40078 accumulated size 40078 invar 4 message_size_cc 10000 ibndvar 0 message_size_fc 20200 ivaredge 0 message_size_ec 20200 ivarcorn 0 message_size_nc 10201 offset 75
pe 1 sizing send buf to pe 1 adding message type 14 size 40078 accumulated size 80156 invar 4 message_size_cc 10000 ibndvar 0 message_size_fc 20200 ivaredge 0 message_size_ec 20200 ivarcorn 0 message_size_nc 10201 offset 75
pe 1 tot_no_blocks_to_be_received 3
pe 1 sizing recv buf from pe 1 adding message type 14 size 40078 accumulated size 40078 iseg 1 mess_segment_loc 1 lindex 40078
pe 1 sizing recv buf from pe 1 adding message type 14 size 40078 accumulated size 80156 iseg 2 mess_segment_loc 40079 lindex 80156
pe 1 sizing recv buf from pe 1 adding message type 14 size 40078 accumulated size 120234 iseg 3 mess_segment_loc 80157 lindex 120234
pe 1 nprocs 2 start packing
pe 1 irpe 1 commatrix_send 2
pe 1 :pack for rempe 1 in buffer layer 1 blk 1 from local lb 1 dtype 14 index 1 buf_dim 80157
pe 0 :pack for rempe 2 in buffer layer 1 blk 2 from local lb 2 dtype 14 index 40079 buf_dim 120235
pe 1 :pack for rempe 1 in buffer layer 1 blk 2 from local lb 2 dtype 14 index 40079 buf_dim 80157
pe 0 :pack for rempe 2 in buffer layer 1 blk 3 from local lb 3 dtype 14 index 80157 buf_dim 120235
pe 1 irpe 2 commatrix_send 0
pe 0 lblk 1 unpacking starting at index 1 buf_dim 80157
put_buffer : pe 0 index on entry 4
put_buffer : pe 0 index update for cc 40004 invar 4 ia ib ja jb ka kb 5 104 5 104 1 1 dtype 14
put_buffer : pe 0 tree info unpacked into block 22
pe 1 lblk 1 unpacking starting at index 1 buf_dim 120235
put_buffer : pe 1 index on entry 4
put_buffer : pe 1 index update for cc 40004 invar 4 ia ib ja jb ka kb 5 104 5 104 1 1 dtype 14
put_buffer : pe 1 tree info unpacked into block 21
pe 1 lblk 1 unpacked into 21
pe 0 lblk 1 unpacked into 22
pe 0 lblk 2 unpacking starting at index 40079 buf_dim 80157
put_buffer : pe 0 index on entry 40082
put_buffer : pe 0 index update for cc 80082 invar 4 ia ib ja jb ka kb 5 104 5 104 1 1 dtype 14
put_buffer : pe 0 tree info unpacked into block 23
pe 1 lblk 2 unpacking starting at index 40079 buf_dim 120235
put_buffer : pe 1 index on entry 40082
put_buffer : pe 1 index update for cc 80082 invar 4 ia ib ja jb ka kb 5 104 5 104 1 1 dtype 14
put_buffer : pe 1 tree info unpacked into block 22
pe 1 lblk 2 unpacked into 22
pe 1 lblk 3 unpacking starting at index 80157 buf_dim 120235
put_buffer : pe 1 index on entry 80160
put_buffer : pe 1 index update for cc 120160 invar 4 ia ib ja jb ka kb 5 104 5 104 1 1 dtype 14
pe 0 lblk 2 unpacked into 23
put_buffer : pe 1 tree info unpacked into block 23
pe 1 lblk 3 unpacked into 23
Paramesh error : pe 1 pe address of required data is not in the list of communicating pes. remote_block 21 remote_pe 1 rem_pe 0 laddress 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 2 0 3 0 0 0
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 0.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------