[Users] Einstein toolkit with Sun Grid Engine
Chris Stevens
chris.stevens at canterbury.ac.nz
Thu Oct 7 15:03:46 CDT 2021
Hi Roland,
That's fantastic, thanks for linking those files.
It works as expected with only MPI processes and no OpenMP threads. I am careful to compile and run with the same (and only) OpenMPI installation on the cluster, so that should be OK.
Looking at a Slurm-to-SGE conversion table, there is no SGE equivalent of Slurm's --cpus-per-task; instead it is the allocation rule of the chosen parallel environment, i.e. the backend, that controls this.
https://srcc.stanford.edu/sge-slurm-conversion
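For comparison, the directive-level mapping looks roughly like this (the PE name "mpi_4per_node" is just a placeholder; actual PE names and slot accounting are site-specific):

# Slurm: 8 MPI ranks, 4 cores per rank
#SBATCH --ntasks=8
#SBATCH --cpus-per-task=4

# SGE: only a parallel environment and a total slot count can be requested;
# how slots are spread across nodes is decided by the PE, not the job
#$ -pe mpi_4per_node 8
export OMP_NUM_THREADS=4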
Further, in the Ranger submit script, the crucial line
#$ -pe @(@PPN_USED@/@NUM_THREADS@)@way @PROCS_REQUESTED@
shows that @PROCS_REQUESTED@ slots are requested (as I currently do), and that the name of the parallel environment itself depends on @NUM_THREADS@. From this I take it that I need to set up a parallel environment that hardcodes the number of threads I want per MPI process and then submit to that environment. I'll see how I go, but it isn't immediately obvious how to do this!
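For the record, my current understanding is that something along these lines should work; the PE name "4way", the 4 ranks per node and the assumption of 16-core nodes are just placeholders for our cluster, and only the key PE fields are shown:

# Define a new parallel environment (run as the SGE admin): qconf -ap 4way
pe_name            4way
slots              9999
allocation_rule    4          # a fixed integer means exactly 4 slots per host
control_slaves     TRUE       # needed for tight integration with the MPI starter
job_is_first_task  FALSE

# In the submit script: request the total number of MPI ranks; each rank
# then runs OMP_NUM_THREADS OpenMP threads on its share of the node's cores
# (4 ranks x 4 threads = 16 cores per node here)
#$ -pe 4way 8
export OMP_NUM_THREADS=4
mpirun -np $NSLOTS ./exe    # OpenMPI built with SGE support picks up $PE_HOSTFILE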
Cheers,
Chris
Dr Chris Stevens
Lecturer in Applied Mathematics
Rm 602, Jack Erskine building
School of Mathematics and Statistics
T: +64 3 369 0396 (Internal 90396)
University of Canterbury | Te Whare Wānanga o Waitaha
Private Bag 4800, Christchurch 8140, New Zealand
http://www.chrisdoesmaths.com
Director
SCRI Ltd
http://www.scri.co.nz
________________________________
From: Roland Haas
Sent: Thursday, October 07, 2021 06:22
To: Chris Stevens
Cc: users at einsteintoolkit.org
Subject: Re: [Users] Einstein toolkit with Sun Grid Engine
Hello Chris,
We used SGE a long time ago on some of the TACC machines.
You can find an old setup for TACC's Ranger cluster in an old commit
like so:
git checkout fed9f8d6fae4c52ed2d0a688fcc99e51b94e608e
and then look at the "ranger" files in OUTDATED subdirectories of
machines, runscripts, submitscripts.
Having all MPI ranks on a single node might also be caused by using
different MPI stacks when compiling and when running, so you must make
sure that the "mpirun" (or equivalent command) you use is the one that
belongs to the MPI library you linked your code against.
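For example, a quick way to check this (the executable name cactus_sim is just an example; use your configuration's name):

# is the mpirun on PATH the one from the MPI stack used at link time?
which mpirun
mpirun --version

# which MPI library did the Cactus executable actually link against?
ldd exe/cactus_sim | grep -i mpi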
Finally, you may also have to check whether this is an issue with threads
versus MPI ranks. I.e., I would check if things are still wrong when you use
only MPI processes and no OpenMP threads at all (in that case you would have
to check what SGE counts as a slot: threads (cores) or MPI ranks (processes)).
Yours,
Roland
> Hi everyone,
>
> I have set up the Einstein Toolkit on a local cluster of 20 nodes with the SGE scheduler. I have not seen any examples of this scheduler being used with the Einstein Toolkit.
>
> I have managed to get it working; however, if I ask for a number of slots that requires more than one node, the nodes are allocated correctly, but all processes and threads end up on a single node, which is then oversubscribed.
>
> My question is whether anybody has used SGE with the Einstein Toolkit, and whether that works well. If it is possible, I can send more details if there are people willing to help solve this inter-node communication problem.
>
> Thanks in advance,
>
> Chris
>
> Dr Chris Stevens
>
> Lecturer in Applied Mathematics
>
> Rm 602, Jack Erskine building
>
> School of Mathematics and Statistics
>
> T: +64 3 369 0396 (Internal 90396)
>
> University of Canterbury | Te Whare Wānanga o Waitaha
>
> Private Bag 4800, Christchurch 8140, New Zealand
>
> http://www.chrisdoesmaths.com
>
>
> Director
> SCRI Ltd
> http://www.scri.co.nz
>
--
My email is as private as my paper mail. I therefore support encrypting
and signing email messages. Get my PGP key from http://pgp.mit.edu .