[FLASH-USERS] MORE nodes MORE time on HPC

Zhao Xu xuzhao1994 at sjtu.edu.cn
Thu Jun 16 23:39:48 EDT 2022


Dear FLASH users & developers,

I have a question about running the FLASH code on HPC. I am running a modified LaserSlab case (the .par file and initBlock.F90 are changed from the defaults to set up the laser and the target). I tried
1) running on 1 node with 40 cores (1 node contains 40 cores), and
2) running on 2 nodes with 80 cores, with the same setup and parameters as in 1).

The results show that case 2) takes double or even more time than case 1), which seems counterintuitive: if I run a case with a larger simulation box or finer resolution, I have to use more cores.

I tried two HPC systems: a) 1 node = 40 cores with 192 GB memory in total, and b) 1 node = 64 cores with 512 GB memory in total. The 2-node run takes about 2 times as long on a) and nearly 5 times as long on b).

I don't know whether this problem comes from settings on the HPC side (such as the MPI and HYPRE versions, or the job system) or from settings on the FLASH side (such as something in the source code files).
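
One check I can think of (a minimal sketch; the partition name 40c192g is hypothetical, and the rank counts match HPC a) above) is whether the MPI ranks actually land on both nodes, since an MPICH launcher that does not see the Slurm allocation could start all 80 ranks on one node and oversubscribe it:

  #!/bin/bash
  #SBATCH --partition=40c192g        # hypothetical partition for HPC a)
  #SBATCH -n 80                      # total MPI ranks
  #SBATCH --ntasks-per-node=40       # ranks per node

  # Print the node each rank runs on and count them.
  # Expected: 40 ranks on each of two nodes; 80 ranks on a
  # single node would mean oversubscription and could explain
  # the slowdown.
  mpirun hostname | sort | uniq -c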

I use gcc 7.5, python 3.8, mpich 3.3.2, hypre 2.11.2, and hdf5 1.10.5.

Both HPCs use the Slurm job system, with a submission script like the one below:

  #!/bin/bash

  #SBATCH --job-name=                   # job name
  #SBATCH --partition=64c512g           # partition (HPC b)
  #SBATCH -n 128                        # total number of tasks
  #SBATCH --ntasks-per-node=64          # tasks per node
  #SBATCH --output=%j.out
  #SBATCH --error=%j.err

  mpirun ./flash4 > laser_slab.log
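
If the launcher turns out to be the problem, a variant I could try (assuming our MPICH was built with PMI support for Slurm, which I have not verified) is to launch through Slurm directly, or to pass the rank count to mpirun explicitly:

  # Launch through Slurm's own launcher (needs MPICH with PMI support):
  srun ./flash4 > laser_slab.log

  # Or tell mpirun explicitly how many ranks Slurm allocated:
  mpirun -np $SLURM_NTASKS ./flash4 > laser_slab.log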

I would appreciate any help.

Thanks!

-- 
Zhao Xu
Laboratory for Laser Plasmas (MoE)
Shanghai Jiao Tong University
800 Dongchuan Rd, Shanghai 200240


