[Users] Using Stampede2 SKX

Joel Giedt giedtj at rpi.edu
Sat Jan 20 11:42:45 CST 2018


Hi, for another application we found that 4-8 MPI ranks per node were necessary in order to saturate the network bandwidth. Since that application was network-bandwidth limited, this was key to performance. Joel
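For reference, a minimal sketch of how that ranks-per-node split can be set in a Slurm job script on Stampede2 (the queue, node count, executable, and parameter file names here are illustrative, not from the original runs):

    #!/bin/bash
    #SBATCH -p skx-normal           # Stampede2 SKX production queue
    #SBATCH -N 4                    # number of nodes (illustrative)
    #SBATCH --ntasks-per-node=8     # MPI ranks per node
    #SBATCH -t 02:00:00

    # 48 cores/node divided by 8 ranks/node = 6 OpenMP threads per rank
    export OMP_NUM_THREADS=6

    # ibrun is TACC's MPI launcher
    ibrun ./cactus_sim lowres.par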


Sent from my Samsung Galaxy S8
-------- Original message --------
From: James Healy <jchsma at rit.edu>
Date: 1/20/18 10:21 AM (GMT-05:00)
To: Einstein Toolkit Users <users at einsteintoolkit.org>, Yosef Zlochower <yosef at astro.rit.edu>, Carlos Lousto <lousto at astro.rit.edu>
Subject: [Users] Using Stampede2 SKX

Hello all,

I am trying to run on the new Skylake processors on Stampede2, and while the run speeds we are obtaining are very good, we are concerned that we aren't optimizing properly when it comes to OpenMP. For instance, we see the best speeds when we use 8 MPI processes per node (with 6 threads each, for a total of 48 threads/node). Based on the architecture (two 24-core sockets per node), we were expecting to see the best speeds with 2 MPI processes/node. Here is what I have tried:
    
- Using the simfactory files for stampede2-skx (config file, run and submit scripts, and modules loaded), I compiled a version of ET_2017_06 with LazEv (RIT's evolution thorn) and McLachlan, and submitted a series of runs that vary both the number of nodes used and how the 48 threads/node are distributed among MPI processes (see the simfactory sketch after this list).

- I use a standard low-resolution grid, with no IO or regridding. Parameter file attached.

- Run speeds are measured from Carpet::physical_time_per_hour at iteration 256 (see the extraction sketch after this list).

- I tried both with and without hwloc/SystemTopology (see the affinity sketch after this list).

- For both McLachlan and LazEv, I see similar results, with 2 MPI/node giving the worst speeds (see attached plot for McLachlan) and a slight preference for 8 MPI/node.

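For concreteness, the sweep in the first item above could look like the following with simfactory, assuming its --procs option counts total threads and --num-threads sets threads per MPI rank (simulation names, node count, and walltime are illustrative):

    # 4 nodes x 48 threads/node = 192 total threads in every run;
    # varying --num-threads changes the split: 24 -> 2 ranks/node,
    # 12 -> 4 ranks/node, 6 -> 8 ranks/node, 3 -> 16 ranks/node
    for nt in 24 12 6 3; do
      ./simfactory/bin/sim create-submit lowres_nt${nt} \
          --parfile=lowres.par --procs=192 --num-threads=${nt} \
          --walltime=2:00:00
    done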
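On the measurement: one way to pull the value at iteration 256 out of the stdout log, assuming Carpet::physical_time_per_hour is listed in IOBasic::outInfo_vars so that it appears as a column of the per-iteration info table (the column position below is an assumption; adjust it to the actual log):

    # $1 = iteration number; $NF = last column, assumed here to be
    # physical_time_per_hour
    awk '$1 == 256 { print $NF }' simulation.out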
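On the hwloc/SystemTopology item: independently of those thorns, the standard OpenMP 4.0 affinity variables can be exported in the job script to keep each rank's threads on adjacent cores, which is worth checking as a comparison point. A sketch for the 8 ranks x 6 threads case:

    # pin each rank's 6 threads to neighboring cores so a rank's
    # threads never straddle a socket boundary
    export OMP_NUM_THREADS=6
    export OMP_PLACES=cores
    export OMP_PROC_BIND=close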
So my questions are:

- Have any other users run tests on Stampede2 SKX?

- Should we expect 2 MPI/node to be the optimal choice? (Some quick arithmetic follows below.)

- If so, are there any other configurations we can try that could help us optimize?
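For what it's worth, the arithmetic behind that expectation: each SKX node has 2 sockets x 24 cores = 48 cores, so 2 ranks/node means one rank per socket, while 8 ranks/node puts four 6-thread ranks on each socket; both splits keep each rank's threads within a single socket. A quick way to confirm the layout on a compute node:

    # show socket/core/NUMA layout on an SKX node
    lscpu | grep -E 'Socket|Core|NUMA'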
    
Thanks in advance!
Jim Healy
  