[FLASH-USERS] MORE nodes MORE time on HPC

赵旭 xuzhao1994 at sjtu.edu.cn
Fri Jun 17 09:26:45 EDT 2022


Hi Ryan,

Thank you for your reply.

Please find attached the log files for case 1) (1node.log) and case 2) (2nodes.log).

The setup command is

./setup -auto $SM_dir -2d +cylindrical -nxb=16 -nyb=16 -maxblocks=1000 +hdf5typeio species=Cham,Fuel,Cone +mtmmmt +laser +pm4dev +uhd3t +mgd mgd_meshgroups=20 -objdir=~/zx/FLASH4.6.2/data/$Data_dir -parfile=$par_dir

and the .par file is also attached.

Is it the AMR mesh that caused the problem?

> "(e.g., if you've written a lot of MPI_BCAST or MPI_ALL_REDUCE calls) "

I didn't write anything into the source code; I only changed the runtime parameters for the targets, lasers, etc.
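
A minimal sketch for comparing the two runs is to pull the final timer
summary out of each attached logfile and look at the per-unit times side by
side (the "perf_summary" marker below is an assumption about how the timing
block is labelled in these logs; adjust the pattern to whatever the files
actually contain):

  # Print each logfile from its last timer-summary header to the end,
  # so the 1-node and 2-node timings can be compared unit by unit.
  for f in 1node.log 2nodes.log; do
      echo "==== $f ===="
      start=$(grep -n 'perf_summary' "$f" | tail -1 | cut -d: -f1)
      [ -n "$start" ] && tail -n +"$start" "$f"
  done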

Best,

Zhao Xu

----- Original Message -----
From: "Ryan Farber" <rjfarber at umich.edu>
To: "赵旭" <xuzhao1994 at sjtu.edu.cn>
Cc: "flash-users" <flash-users at flash.rochester.edu>
Sent: Friday, June 17, 2022, 5:31:46 PM
Subject: Re: [FLASH-USERS] MORE nodes MORE time on HPC

Hi Zhao,

I'm having some trouble understanding exactly the cases you're comparing.
If you attach logfiles for each case, that should clear things up.

More generally, using more than one node / more processors increases the
communication time so if your problem doesn't scale well (e.g., if you've
written a lot of MPI_BCAST or MPI_ALL_REDUCE calls) then using more
processors can result in a slower solution time.
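
A quick way to check whether inter-node communication is the bottleneck is
to time a collective on one node versus two before digging into FLASH
itself. A sketch, assuming the OSU micro-benchmarks are available on the
cluster (osu_allreduce is not part of FLASH, and the path below is only a
placeholder):

  # Placeholder path to the OSU micro-benchmarks collective tests.
  OSU=/path/to/osu-micro-benchmarks/collective/osu_allreduce

  # 40 ranks packed onto a single node.
  srun -N 1 -n 40 $OSU

  # 80 ranks spread across two nodes. If the reported latencies jump
  # sharply here, the slowdown comes from the interconnect rather than
  # from FLASH itself.
  srun -N 2 -n 80 --ntasks-per-node=40 $OSU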

Best,
--------
Ryan


On Fri, Jun 17, 2022 at 7:19 AM 赵旭 <xuzhao1994 at sjtu.edu.cn> wrote:

> Dear all,
>
> Sorry about the typo; it should read:
>
> '' the results show that in case 2) it takes double or even more time than
> 1) ''
>
> That is, when I use more than 1 node, it takes more time.
>
>
> ----- Original Message -----
> From: "赵旭" <xuzhao1994 at sjtu.edu.cn>
> To: "flash-users" <flash-users at flash.rochester.edu>
> Sent: Friday, June 17, 2022, 11:39:48 AM
> Subject: [FLASH-USERS] MORE nodes MORE time on HPC
>
> Dear FLASH user & developers,
>
> I have a question about running the FLASH code on HPC. I am running a
> LaserSlab case modified from the default one (I changed the .par file and
> initBlock.F90 for the laser and target). I tried
> 1) running on 1 node with 40 cores (1 node contains 40 cores), and
> 2) running on 2 nodes with 80 cores, with the same setup and parameters as
> in 1).
>
> the results show that in case 1) it takes double or even more time than
> 1), and this seems counterintuitive, because if I run a case with a larger
> simulation box or with finer resolution I have to use more cores.
>
> I tried two HPC systems: a) 1 node = 40 cores with 192 GB total memory, and
> b) 1 node = 64 cores with 512 GB total memory. The run takes about 2 times
> as long on 2 nodes on a) and nearly 5 times as long on 2 nodes on b).
>
> I don't know if this problem comes from settings related to the HPC system
> (like the MPI and HYPRE versions, or the job system) or from settings
> related to the FLASH code (like in some source code files).
>
> I use GCC 7.5, Python 3.8, MPICH 3.3.2, HYPRE 2.11.2, and HDF5 1.10.5.
>
> Both HPCs use the Slurm job system, with a submission script like the one below:
>
>   #!/bin/bash
>
>   #SBATCH --job-name=         # Name
>   #SBATCH --partition=64c512g               # cpu
>   #SBATCH -n 128                       # total cpu
>   #SBATCH --ntasks-per-node=64          # cpu/node
>   #SBATCH --output=%j.out
>   #SBATCH --error=%j.err
>
>   mpirun ./flash4 >laser_slab.log
>
> I would appreciate any help.
>
> Thanks !
>
> --
> Zhao Xu
> Laboratory for Laser Plasmas (MoE)
> Shanghai Jiao Tong University
> 800 Dongchuan Rd, Shanghai 200240
> _______________________________________________
> flash-users mailing list
> flash-users at flash.rochester.edu
>
> For list info, including unsubscribe:
> https://flash.rochester.edu/mailman/listinfo/flash-users
>
-- 
Zhao Xu
Laboratory for Laser Plasmas (MoE)
Shanghai Jiao Tong University
800 Dongchuan Rd, Shanghai 200240
-------------- next part --------------
A non-text attachment was scrubbed...
Name: flash.par
Type: application/octet-stream
Size: 8785 bytes
Desc: not available
URL: <http://flash.rochester.edu/pipermail/flash-users/attachments/20220617/a1383962/attachment-0001.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 1node.log
Type: text/x-log
Size: 565459 bytes
Desc: not available
URL: <http://flash.rochester.edu/pipermail/flash-users/attachments/20220617/a1383962/attachment-0002.bin>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 2nodes.log
Type: text/x-log
Size: 562667 bytes
Desc: not available
URL: <http://flash.rochester.edu/pipermail/flash-users/attachments/20220617/a1383962/attachment-0003.bin>

