[Users] optimal nodes, mpi tasks, and cpus per mpi task?
Bill Gabella
b.gabella at vanderbilt.edu
Fri Apr 3 12:12:09 CDT 2020
Okay, just looking for a little advice from the community. I am running the qc0 BBH merger
example on our HPC at Vanderbilt and trying to tune my parameters. We
use Slurm / SBATCH, and the parameters I am working with are:
o amount of memory per node
o number of nodes
o number of MPI tasks
o if more than one cpu per task helps
One example was 4 GB per node, 4 nodes, 4 MPI tasks, and 2 CPUs per
task. It was a little slower than with 1 CPU per task---perhaps memory. I have
learned from you all that the ETK is more of a memory hog than a CPU hog,
that is, lots of memory helps more than lots of cores.
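To make sure I understand how the parameters combine, here is a small sketch of the arithmetic implied by the example above (4 nodes, 4 total MPI tasks, 2 CPUs per task); the variable names are mine, not Slurm's:

```shell
# With -n (total tasks) equal to --nodes, Slurm places one MPI rank per
# node, and -c gives each rank that many cores.
nodes=4
ntasks=4            # sbatch -n: total MPI ranks across all nodes
cpus_per_task=2     # sbatch -c: cores per MPI rank
tasks_per_node=$((ntasks / nodes))
cores_total=$((ntasks * cpus_per_task))
echo "tasks per node: $tasks_per_node"
echo "total cores: $cores_total"
```

So this job uses 8 cores in total, but only 1 MPI rank per node; the second core per rank does nothing unless the executable is threaded.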
Any examples you have of a simple, vanilla BBH run would be welcome. My SBATCH
script includes:
#SBATCH --mem 4000   # amount of memory per node (MB)
#SBATCH --nodes=4    # number of nodes on which to run
#SBATCH -n 4         # total number of MPI tasks requested, default 1 task / 1 cpu
#SBATCH -c 2         # number of cpus per MPI task
#SBATCH -t 1:00:00   # run time (d-hh:mm:ss)
and I was told to use srun rather than mpirun -np XXXX:
myparFile="qc0-mclachlan.par"
myCactusExe="/labs/einstein/20191028/Cactus/exe/cactus_sim"
##echo "mpirun -np 4 $myCactusExe $myparFile"
##mpirun -np 4 $myCactusExe $myparFile
echo "srun $myCactusExe $myparFile"
srun $myCactusExe $myparFile
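One guess about why 2 CPUs per task came out slower: Cactus runs hybrid MPI+OpenMP, and if OMP_NUM_THREADS is not exported, each rank may run single-threaded and the extra core from -c 2 sits idle. A sketch of the launch section, assuming the executable was built with OpenMP (the export line is my addition, not from my current script):

```shell
# Tell each MPI rank to use the cores Slurm gave it (-c); fall back to 1
# outside a Slurm allocation.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
echo "srun $myCactusExe $myparFile (threads per rank: $OMP_NUM_THREADS)"
srun $myCactusExe $myparFile
```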
Eventually I want to get Simfactory running, and I see several HPCs that
use Slurm in the machine database directory. Still, I think I need to
understand these parameters to tune the scripts Simfactory generates.
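For reference, the Simfactory workflow I have in mind looks roughly like the following (commands as documented in the Einstein Toolkit tutorials; the thornlist path and simulation name are placeholders, and a machine entry under simfactory/mdb/machines/ would carry the Slurm settings instead of a hand-written SBATCH script):

```shell
# Interactive first-time machine configuration
./simfactory/bin/sim setup

# Build Cactus from a thornlist (path is a placeholder)
./simfactory/bin/sim build --thornlist manifest/einsteintoolkit.th

# Create and submit a simulation; --procs and --walltime replace the
# sbatch -n and -t flags
./simfactory/bin/sim create-submit qc0 \
    --parfile qc0-mclachlan.par --procs 8 --walltime 1:00:00
```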
thanks, bill
--
=====================================
William Gabella
Research Assistant Professor
Department of Physics and Astronomy
Vanderbilt University
Nashville, TN USA
b.gabella at vanderbilt.edu
(o) 615-343-2713