<!DOCTYPE html>
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF" link="#0B6CDA"
vlink="#551A8B" alink="#EE0000">
Dear FLASH community,
<p>Although I am not using FLASH directly, I was hoping to find
some insight regarding the PARAMESH package, which FLASH uses. As
most of you know, PARAMESH has been unsupported for many years
now, so I would appreciate any help from this community regarding
known problems and bugs in PARAMESH, especially relating to
inter-process communication.<br>
</p>
<p>The problem I am having concerns guard cell filling between
neighboring blocks that live on different processes. When running
a single process (serial execution) everything works fine.
However, when running multiple processes in parallel (e.g. mpirun
-np 2 ...), the guard cells on processes with mype &gt; 0 (where
mype is the rank returned by MPI_COMM_RANK(MPI_COMM_WORLD, mype,
ierr)) are either not filled at all or filled at some block
boundaries but not at others. In these cases the guard cells of
both UNK and WORK keep the value 0.0 they were initialized with.
I checked the values of the NEIGH array to confirm that the block
connectivity (who neighbors whom) is correct, so the problem most
likely lies in the communication of the data between processes.</p>
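<p>For concreteness, here is a minimal sketch of the kind of check
involved (illustrative only, not my exact code): it assumes the
usual PARAMESH module and array names (paramesh_dimensions,
physicaldata, tree, unk, nguard, lnblocks) and a 2D setup, so the
loop bounds will need adjusting to your configuration. It scans the
low-x guard layer of every local block for cells that still hold
0.0:</p>
<pre>
! Sketch only: flag guard cells that were never written during the fill.
! The exact comparison with 0.0 is intentional: untouched cells keep
! their initial value.
subroutine check_guardcells(mype)
  use paramesh_dimensions   ! nguard, il_bnd, jl_bnd, ju_bnd
  use physicaldata          ! unk
  use tree                  ! lnblocks
  implicit none
  integer, intent(in) :: mype
  integer :: lb, i, j

  do lb = 1, lnblocks                        ! blocks local to this rank
     do j = jl_bnd, ju_bnd
        do i = il_bnd, il_bnd + nguard - 1   ! low-x guard layer only
           if (unk(1,i,j,1,lb) == 0.0) then
              write(*,*) 'pe', mype, 'blk', lb, 'unfilled guard cell at', i, j
           end if
        end do
     end do
  end do
end subroutine check_guardcells
</pre>
<p>Called right after the guard cell fill (e.g. after
amr_guardcell(mype, 1, nguard)), a loop like this stays silent in
the serial run but flags the cells described above on ranks with
mype &gt; 0.</p>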
<p>With the flag #define DEBUG set in the PARAMESH header file, I
get the output reproduced below, ending in an error message (bold
text).</p>
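<p>(For context, the messages reproduced below come from write
statements guarded by that preprocessor flag; schematically, and
with illustrative formatting, the PARAMESH sources contain blocks
of the form:)</p>
<pre>
#ifdef DEBUG
  ! debug tracing is compiled in only when DEBUG is defined
  write(*,*) 'pe ', mype, ' entered amr_check_refine'
#endif
</pre>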
<p>Please get in touch if you have had similar problems with the
AMR package or if you are also working extensively with
PARAMESH.<br>
</p>
<p>Kind regards</p>
<p>Andris<br>
</p>
<p>--------------------------------------------------------------------------------------------------------------------------------------<br>
</p>
<p>pe 0 entered amr_check_refine<br>
amr_check_refine : proc 0 step 1
refine(1:lnblocks) T<br>
amr_check_refine : proc 0 step 2 jface 1<br>
amr_check_refine : proc 0 waiting jface
1 testt nrecv 0 nsend 0<br>
amr_check_refine : proc 0 step 3 jface 1<br>
amr_check_refine : proc 0 step 2 jface 2<br>
amr_check_refine : proc 0 waiting jface
2 testt nrecv 0 nsend 0<br>
amr_check_refine : proc 0 step 3 jface 2<br>
amr_check_refine : proc 0 step 2 jface 3<br>
amr_check_refine : proc 0 waiting jface
3 testt nrecv 0 nsend 0<br>
amr_check_refine : proc 0 step 3 jface 3<br>
amr_check_refine : proc 0 step 2 jface 4<br>
amr_check_refine : proc 0 waiting jface
4 testt nrecv 0 nsend 0<br>
amr_check_refine : proc 0 step 3 jface 4<br>
amr_check_refine : proc 0 step 4<br>
pe 0 exiting amr_check_refine<br>
pe 1 entered amr_check_refine<br>
amr_check_refine : proc 1 step 1 refine(1:lnblocks)
<br>
amr_check_refine : proc 1 step 2 jface 1<br>
amr_check_refine : proc 1 waiting jface
1 testt nrecv 0 nsend 0<br>
amr_check_refine : proc 1 step 3 jface 1<br>
amr_check_refine : proc 1 step 2 jface 2<br>
amr_check_refine : proc 1 waiting jface
2 testt nrecv 0 nsend 0<br>
amr_check_refine : proc 1 step 3 jface 2<br>
amr_check_refine : proc 1 step 2 jface 3<br>
amr_check_refine : proc 1 waiting jface
3 testt nrecv 0 nsend 0<br>
amr_check_refine : proc 1 step 3 jface 3<br>
amr_check_refine : proc 1 step 2 jface 4<br>
amr_check_refine : proc 1 waiting jface
4 testt nrecv 0 nsend 0<br>
amr_check_refine : proc 1 step 3 jface 4<br>
amr_check_refine : proc 1 step 4<br>
pe 1 exiting amr_check_refine<br>
iteration, no. not moved = 0 0<br>
message sizes 1 cc/nc/fc/ec 0
0 0 0<br>
message sizes 2 cc/nc/fc/ec 0
0 0 0<br>
message sizes 3 cc/nc/fc/ec 0
0 0 0<br>
message sizes 4 cc/nc/fc/ec 0
0 0 0<br>
message sizes 5 cc/nc/fc/ec 0
0 0 0<br>
message sizes 6 cc/nc/fc/ec 0
0 0 0<br>
message sizes 7 cc/nc/fc/ec 0
0 0 0<br>
message sizes 8 cc/nc/fc/ec 0
0 0 0<br>
message sizes 9 cc/nc/fc/ec 0
0 0 0<br>
message sizes 10 cc/nc/fc/ec 16
25 40 40<br>
message sizes 11 cc/nc/fc/ec 400
505 904 904<br>
message sizes 12 cc/nc/fc/ec 16
25 40 40<br>
message sizes 13 cc/nc/fc/ec 400
505 904 904<br>
message sizes 14 cc/nc/fc/ec 10000
10201 20200 20200<br>
message sizes 15 cc/nc/fc/ec 400
505 904 904<br>
message sizes 16 cc/nc/fc/ec 16
25 40 40<br>
message sizes 17 cc/nc/fc/ec 400
505 904 904<br>
message sizes 18 cc/nc/fc/ec 16
25 40 40<br>
message sizes 19 cc/nc/fc/ec 0
0 0 0<br>
message sizes 20 cc/nc/fc/ec 0
0 0 0<br>
message sizes 21 cc/nc/fc/ec 0
0 0 0<br>
message sizes 22 cc/nc/fc/ec 0
0 0 0<br>
message sizes 23 cc/nc/fc/ec 0
0 0 0<br>
message sizes 24 cc/nc/fc/ec 0
0 0 0<br>
message sizes 25 cc/nc/fc/ec 0
0 0 0<br>
message sizes 26 cc/nc/fc/ec 0
0 0 0<br>
message sizes 1 cc/nc/fc/ec 0
0 0 0<br>
message sizes 2 cc/nc/fc/ec 0
0 0 0<br>
message sizes 3 cc/nc/fc/ec 0
0 0 0<br>
message sizes 4 cc/nc/fc/ec 0
0 0 0<br>
message sizes 5 cc/nc/fc/ec 0
0 0 0<br>
message sizes 6 cc/nc/fc/ec 0
0 0 0<br>
message sizes 7 cc/nc/fc/ec 0
0 0 0<br>
message sizes 8 cc/nc/fc/ec 0
0 0 0<br>
message sizes 9 cc/nc/fc/ec 0
0 0 0<br>
message sizes 10 cc/nc/fc/ec 16
25 40 40<br>
message sizes 11 cc/nc/fc/ec 400
505 904 904<br>
message sizes 12 cc/nc/fc/ec 16
25 40 40<br>
message sizes 13 cc/nc/fc/ec 400
505 904 904<br>
message sizes 14 cc/nc/fc/ec 10000
10201 20200 20200<br>
message sizes 15 cc/nc/fc/ec 400
505 904 904<br>
message sizes 16 cc/nc/fc/ec 16
25 40 40<br>
message sizes 17 cc/nc/fc/ec 400
505 904 904<br>
message sizes 18 cc/nc/fc/ec 16
25 40 40<br>
message sizes 19 cc/nc/fc/ec 0
0 0 0<br>
message sizes 20 cc/nc/fc/ec 0
0 0 0<br>
message sizes 21 cc/nc/fc/ec 0
0 0 0<br>
message sizes 22 cc/nc/fc/ec 0
0 0 0<br>
message sizes 23 cc/nc/fc/ec 0
0 0 0<br>
message sizes 24 cc/nc/fc/ec 0
0 0 0<br>
message sizes 25 cc/nc/fc/ec 0
0 0 0<br>
message sizes 26 cc/nc/fc/ec 0
0 0 0<br>
message sizes 27 cc/nc/fc/ec 0
0 0 0<br>
message sizes 28 cc/nc/fc/ec 0
0 0 0<br>
message sizes 29 cc/nc/fc/ec 0
0 0 0<br>
message sizes 30 cc/nc/fc/ec 0
0 0 0<br>
message sizes 31 cc/nc/fc/ec 0
0 0 0<br>
message sizes 32 cc/nc/fc/ec 0
0 0 0<br>
message sizes 33 cc/nc/fc/ec 0
0 0 0<br>
message sizes 34 cc/nc/fc/ec 0
0 0 0<br>
message sizes 27 cc/nc/fc/ec 0
0 0 0<br>
message sizes 28 cc/nc/fc/ec 0
0 0 0<br>
message sizes 29 cc/nc/fc/ec 0
0 0 0<br>
message sizes 30 cc/nc/fc/ec 0
0 0 0<br>
message sizes 31 cc/nc/fc/ec 0
0 0 0<br>
message sizes 32 cc/nc/fc/ec 0
0 0 0<br>
message sizes 33 cc/nc/fc/ec 0
0 0 0<br>
message sizes 34 cc/nc/fc/ec 0
0 0 0<br>
message sizes 35 cc/nc/fc/ec 0
0 0 0<br>
message sizes 36 cc/nc/fc/ec 0
0 0 0<br>
message sizes 37 cc/nc/fc/ec 64
81 144 144<br>
message sizes 38 cc/nc/fc/ec 800
909 1708 1708<br>
message sizes 39 cc/nc/fc/ec 64
81 144 144<br>
message sizes 35 cc/nc/fc/ec 0
0 0 0<br>
message sizes 36 cc/nc/fc/ec 0
0 0 0<br>
message sizes 37 cc/nc/fc/ec 64
81 144 144<br>
message sizes 38 cc/nc/fc/ec 800
909 1708 1708<br>
message sizes 39 cc/nc/fc/ec 64
81 144 144<br>
message sizes 40 cc/nc/fc/ec 800
909 1708 1708<br>
message sizes 41 cc/nc/fc/ec 10000
10201 20200 20200<br>
message sizes 42 cc/nc/fc/ec 800
909 1708 1708<br>
message sizes 43 cc/nc/fc/ec 64
81 144 144<br>
message sizes 44 cc/nc/fc/ec 800
909 1708 1708<br>
message sizes 45 cc/nc/fc/ec 64
81 144 144<br>
message sizes 46 cc/nc/fc/ec 0
0 0 0<br>
message sizes 47 cc/nc/fc/ec 0
0 0 0<br>
message sizes 48 cc/nc/fc/ec 0
0 0 0<br>
message sizes 49 cc/nc/fc/ec 0
0 0 0<br>
message sizes 50 cc/nc/fc/ec 0
0 0 0<br>
message sizes 51 cc/nc/fc/ec 0
0 0 0<br>
message sizes 52 cc/nc/fc/ec 0
0 0 0<br>
message sizes 53 cc/nc/fc/ec 0
0 0 0<br>
message sizes 54 cc/nc/fc/ec 0
0 0 0<br>
pe 0 nprocs 2 start packing<br>
pe 0 irpe 1 commatrix_send 0<br>
pe 0 irpe 2 commatrix_send 3<br>
pe 0 :pack for rempe 2 in buffer
layer 1 blk 1 from local lb
1 dtype 14 index 1 buf_dim 5955<br>
pe 0 :pack for rempe 2 in buffer
layer 1 blk 2 from local lb
2 dtype 14 index 40079 buf_dim 5955<br>
pe 0 :pack for rempe 2 in buffer
layer 1 blk 3 from local lb
3 dtype 14 index 80157 buf_dim 5955<br>
pe 0 iblk 1 unpacking starting at
index 1 buf_dim 0<br>
put_buffer : pe 0 index on entry 4<br>
put_buffer : pe 0 index update for cc 40004
invar 4 ia ib ja jb ka kb 5
104 5 104 1 1
dtype 14<br>
put_buffer : pe 0 tree info unpacked into
block 1<br>
pe 0 iblk 1 unpacked into 1<br>
message sizes 40 cc/nc/fc/ec 800
909 1708 1708<br>
message sizes 41 cc/nc/fc/ec 10000
10201 20200 20200<br>
message sizes 42 cc/nc/fc/ec 800
909 1708 1708<br>
message sizes 43 cc/nc/fc/ec 64
81 144 144<br>
message sizes 44 cc/nc/fc/ec 800
909 1708 1708<br>
message sizes 45 cc/nc/fc/ec 64
81 144 144<br>
message sizes 46 cc/nc/fc/ec 0
0 0 0<br>
message sizes 47 cc/nc/fc/ec 0
0 0 0<br>
message sizes 48 cc/nc/fc/ec 0
0 0 0<br>
message sizes 49 cc/nc/fc/ec 0
0 0 0<br>
message sizes 50 cc/nc/fc/ec 0
0 0 0<br>
message sizes 51 cc/nc/fc/ec 0
0 0 0<br>
message sizes 52 cc/nc/fc/ec 0
0 0 0<br>
message sizes 53 cc/nc/fc/ec 0
0 0 0<br>
message sizes 54 cc/nc/fc/ec 0
0 0 0<br>
pe 1 nprocs 2 start packing<br>
pe 1 irpe 1 commatrix_send 2<br>
pe 1 :pack for rempe 1 in buffer
layer 1 blk 1 from local lb
1 dtype 14 index 1 buf_dim -1102110160<br>
pe 1 :pack for rempe 1 in buffer
layer 1 blk 2 from local lb
2 dtype 14 index 40079 buf_dim -1102110160<br>
pe 1 irpe 2 commatrix_send 0<br>
pe 1 iblk 1 unpacking starting at
index 1 buf_dim 0<br>
put_buffer : pe 1 index on entry 4<br>
put_buffer : pe 1 index update for cc 40004
invar 4 ia ib ja jb ka kb 5
104 5 104 1 1
dtype 14<br>
put_buffer : pe 1 tree info unpacked into
block 1<br>
pe 1 iblk 1 unpacked into 1<br>
pe 1 iblk 2 unpacking starting at
index 40079 buf_dim 0<br>
put_buffer : pe 1 index on entry 40082<br>
put_buffer : pe 1 index update for cc 80082
invar 4 ia ib ja jb ka kb 5
104 5 104 1 1
dtype 14<br>
put_buffer : pe 1 tree info unpacked into
block 2<br>
pe 1 iblk 2 unpacked into 2<br>
pe 0 iblk 2 unpacking starting at
index 40079 buf_dim 0<br>
put_buffer : pe 0 index on entry 40082<br>
put_buffer : pe 0 index update for cc 80082
invar 4 ia ib ja jb ka kb 5
104 5 104 1 1
dtype 14<br>
put_buffer : pe 0 tree info unpacked into
block 2<br>
pe 0 iblk 2 unpacked into 2<br>
pack_blocks : pe 0 lcc lfc lec lnc T F F F
lguard_in_progress F iopt 1
ngcell_on_cc 4<br>
pack_blocks : pe 0 loc_message_size(14)
40078<br>
pack_blocks : pe 0 loc_message_size(17)
1678<br>
pe 0 sizing send buf to pe 2 adding
message type 14 size 40078 accumulated
size 40078 invar 4 message_size_cc
10000 ibndvar 0 message_size_fc 20200
ivaredge 0 message_size_ec 20200
ivarcorn 0 message_size_nc 10201
offset 75<br>
pe 0 sizing send buf to pe 2 adding
message type 14 size 40078 accumulated
size 80156 invar 4 message_size_cc
10000 ibndvar 0 message_size_fc 20200
ivaredge 0 message_size_ec 20200
ivarcorn 0 message_size_nc 10201
offset 75<br>
pe 0 sizing send buf to pe 2 adding
message type 14 size 40078 accumulated
size 120234 invar 4 message_size_cc
10000 ibndvar 0 message_size_fc 20200
ivaredge 0 message_size_ec 20200
ivarcorn 0 message_size_nc 10201
offset 75<br>
pe 0 tot_no_blocks_to_be_received 2<br>
pe 0 sizing recv buf from pe 2 adding
message type 14 size 40078 accumulated
size 40078 iseg 1
mess_segment_loc 1 lindex 40078<br>
pe 0 sizing recv buf from pe 2 adding
message type 14 size 40078 accumulated
size 80156 iseg 2 mess_segment_loc
40079 lindex 80156<br>
pe 0 nprocs 2 start packing<br>
pe 0 irpe 1 commatrix_send 0<br>
pe 0 irpe 2 commatrix_send 3<br>
pe 0 :pack for rempe 2 in buffer
layer 1 blk 1 from local lb
1 dtype 14 index 1 buf_dim 120235<br>
pe 1 iblk 3 unpacking starting at
index 80157 buf_dim 0<br>
put_buffer : pe 1 index on entry 80160<br>
put_buffer : pe 1 index update for cc 120160
invar 4 ia ib ja jb ka kb 5
104 5 104 1 1
dtype 14<br>
put_buffer : pe 1 tree info unpacked into
block 3<br>
pe 1 iblk 3 unpacked into 3<br>
pack_blocks : pe 1 lcc lfc lec lnc T F F F
lguard_in_progress F iopt 1
ngcell_on_cc 4<br>
pack_blocks : pe 1 loc_message_size(14)
40078<br>
pack_blocks : pe 1 loc_message_size(17)
1678<br>
pe 1 sizing send buf to pe 1 adding
message type 14 size 40078 accumulated
size 40078 invar 4 message_size_cc
10000 ibndvar 0 message_size_fc 20200
ivaredge 0 message_size_ec 20200
ivarcorn 0 message_size_nc 10201
offset 75<br>
pe 1 sizing send buf to pe 1 adding
message type 14 size 40078 accumulated
size 80156 invar 4 message_size_cc
10000 ibndvar 0 message_size_fc 20200
ivaredge 0 message_size_ec 20200
ivarcorn 0 message_size_nc 10201
offset 75<br>
pe 1 tot_no_blocks_to_be_received 3<br>
pe 1 sizing recv buf from pe 1 adding
message type 14 size 40078 accumulated
size 40078 iseg 1
mess_segment_loc 1 lindex 40078<br>
pe 1 sizing recv buf from pe 1 adding
message type 14 size 40078 accumulated
size 80156 iseg 2 mess_segment_loc
40079 lindex 80156<br>
pe 1 sizing recv buf from pe 1 adding
message type 14 size 40078 accumulated
size 120234 iseg 3 mess_segment_loc
80157 lindex 120234<br>
pe 1 nprocs 2 start packing<br>
pe 1 irpe 1 commatrix_send 2<br>
pe 1 :pack for rempe 1 in buffer
layer 1 blk 1 from local lb
1 dtype 14 index 1 buf_dim 80157<br>
pe 0 :pack for rempe 2 in buffer
layer 1 blk 2 from local lb
2 dtype 14 index 40079 buf_dim 120235<br>
pe 1 :pack for rempe 1 in buffer
layer 1 blk 2 from local lb
2 dtype 14 index 40079 buf_dim 80157<br>
pe 0 :pack for rempe 2 in buffer
layer 1 blk 3 from local lb
3 dtype 14 index 80157 buf_dim 120235<br>
pe 1 irpe 2 commatrix_send 0<br>
pe 0 lblk 1 unpacking starting at
index 1 buf_dim 80157<br>
put_buffer : pe 0 index on entry 4<br>
put_buffer : pe 0 index update for cc 40004
invar 4 ia ib ja jb ka kb 5
104 5 104 1 1
dtype 14<br>
put_buffer : pe 0 tree info unpacked into
block 22<br>
pe 1 lblk 1 unpacking starting at
index 1 buf_dim 120235<br>
put_buffer : pe 1 index on entry 4<br>
put_buffer : pe 1 index update for cc 40004
invar 4 ia ib ja jb ka kb 5
104 5 104 1 1
dtype 14<br>
put_buffer : pe 1 tree info unpacked into
block 21<br>
pe 1 lblk 1 unpacked into 21<br>
pe 0 lblk 1 unpacked into 22<br>
pe 0 lblk 2 unpacking starting at
index 40079 buf_dim 80157<br>
put_buffer : pe 0 index on entry 40082<br>
put_buffer : pe 0 index update for cc 80082
invar 4 ia ib ja jb ka kb 5
104 5 104 1 1
dtype 14<br>
put_buffer : pe 0 tree info unpacked into
block 23<br>
pe 1 lblk 2 unpacking starting at
index 40079 buf_dim 120235<br>
put_buffer : pe 1 index on entry 40082<br>
put_buffer : pe 1 index update for cc 80082
invar 4 ia ib ja jb ka kb 5
104 5 104 1 1
dtype 14<br>
put_buffer : pe 1 tree info unpacked into
block 22<br>
pe 1 lblk 2 unpacked into 22<br>
pe 1 lblk 3 unpacking starting at
index 80157 buf_dim 120235<br>
put_buffer : pe 1 index on entry 80160<br>
put_buffer : pe 1 index update for cc 120160
invar 4 ia ib ja jb ka kb 5
104 5 104 1 1
dtype 14<br>
pe 0 lblk 2 unpacked into 23<br>
put_buffer : pe 1 tree info unpacked into
block 23<br>
pe 1 lblk 3 unpacked into 23</p>
<p><b> Paramesh error : pe 1 pe address of required
data is not in the list of communicating pes.
remote_block 21 remote_pe 1
rem_pe 0 laddress 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 1
0 2 0 3 0
0 0</b><br>
--------------------------------------------------------------------------<br>
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD<br>
with errorcode 0.<br>
<br>
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI
processes.<br>
You may or may not see output from other processes, depending on<br>
exactly when Open MPI kills them.<br>
--------------------------------------------------------------------------<br>
</p>
</body>
</html>