<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);" class="elementToProof">
Hi Roland,</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);" class="elementToProof">
The admins reinstalled openmpi and the hello script now runs correctly with it. However, the Toolkit still produced segmentation faults after srun. Switching to mvapich seems to have largely done the trick, though: the TOV job is now able to start executing. As
long as there is only 1 MPI process (with however many threads), the TOV job runs to completion correctly. However, any time there are multiple MPI processes, it crashes at the first time iteration:<br>
<br>
<pre style="font-family: Consolas, 'Courier New', monospace; font-size: 11pt;">
INFO (TOVSolver): Done interpolation.
---------------------------------------------------------------------------
Iteration      Time |              ADMBASE::alp |            HYDROBASE::rho
                    |      minimum      maximum |      minimum      maximum
---------------------------------------------------------------------------
        0     0.000 |    0.6698612    0.9966374 | 1.000000e-10    0.0012800
Rank 1 with PID 3964893 received signal 11
Writing backtrace to static_tov/backtrace.1.txt
srun: error: c40: task 1: Segmentation fault (core dumped)
</pre>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);" class="elementToProof">
The backtrace is attached, along with the last portion of the output; the issue looks like it is tied to Carpet. Are there settings in the parameter file that need adjusting to fix this, or perhaps specific settings for the number of ranks
and threads? For reference, the job script I am using is sketched below.<br>
</div>
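<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);" class="elementToProof">
This is roughly the job script I am submitting (the module name, executable path, and resource numbers are approximate):</div>
<pre style="font-family: Consolas, 'Courier New', monospace; font-size: 11pt;">
#!/bin/bash
#SBATCH --job-name=static_tov
#SBATCH --nodes=1
#SBATCH --ntasks=2            # MPI ranks; the run only completes with --ntasks=1
#SBATCH --cpus-per-task=8     # OpenMP threads per rank
#SBATCH --time=01:00:00

module load mvapich2          # approximate module name

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

srun ./exe/cactus_sim static_tov.par
</pre>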
<div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);" class="elementToProof">
Thank you,</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);" class="elementToProof">
Jessica</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div id="Signature">
<div>
<div name="divtagdefaultwrapper" style="font-family:Calibri,Arial,Helvetica,sans-serif; font-size:; margin:0">
<div><b><br>
</b></div>
<b>Dr. Jessica S. Warren</b>
<div>Physics Lecturer</div>
<div>Indiana University Northwest</div>
<div>warrenjs@iun.edu</div>
</div>
<div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif; font-size:12pt; color:rgb(0,0,0);">
<br>
<hr tabindex="-1" style="display:inline-block; width:98%;">
<b>From:</b> Roland Haas<br>
<b>Sent:</b> Thursday, August 11, 2022 8:32 AM<br>
<b>To:</b> Warren, Jessica Sawyer<br>
<b>Cc:</b> users@einsteintoolkit.org<br>
<b>Subject:</b> Re: [Users] [External] Re: Running with SLURM
<div><br>
</div>
</div>
<div class="BodyFragment"><font size="2"><span style="font-size:11pt;">
<div class="PlainText">Hello Jessica,<br>
<br>
If you get the same error from hello-world and from Cactus then it<br>
would seem that there is still something off with the MPI stack.<br>
<br>
The -lmpi_cxx option instructs the linker to link in the C++ bindings for<br>
MPI; for just the hello-world example, which is C code, they are not<br>
required and -lmpi alone is sufficient.<br>
<br>
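For example, something along these lines should be enough for the C<br>
hello-world (the include and library paths below are placeholders for<br>
wherever your OpenMPI installation lives):<br>
<br>
gcc -o hello hello.c -I/path/to/openmpi/include -L/path/to/openmpi/lib -lmpi<br>
<br>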
I would see two options that would let you get running somewhat quickly:<br>
<br>
1. report your issues with OpenMPI and hello-world (including a link to<br>
the source code on the web, and the exact command line used to compile) to<br>
the admins and ask them for help<br>
<br>
1.5 instead of using gcc to compile for OpenMPI, use the official MPI<br>
compiler wrapper mpicc, which would just be:<br>
<br>
mpicc -o hello hello.c<br>
<br>
that is, you do not have to pass any library or include options. If this<br>
fails, I would definitely talk to the admins.<br>
<br>
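You can also ask the wrapper what it actually does under the hood, which is<br>
useful information to include when reporting the problem to the admins (the<br>
exact flag depends on the MPI implementation):<br>
<br>
mpicc -showme   # prints the full underlying compile command for OpenMPI<br>
mpicc -show     # the equivalent for MVAPICH/MPICH<br>
<br>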
2. compile hello-world using mvapich. For this, the easiest way is to<br>
make sure to load the mvapich module and then use the same compiler<br>
wrapper invocation to compile:<br>
<br>
mpicc -o hello hello.c<br>
<br>
If 2 works then you can also compile the Einstein Toolkit with mvapich.<br>
You have to make sure to load the correct module before compiling the<br>
toolkit and then ExternalLibraries/MPI should figure out (from the<br>
mpicc wrapper) how to compile the toolkit. <br>
<br>
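Roughly, the sequence would be something like the following (the module<br>
name on your cluster may differ, e.g. mvapich2 vs. mvapich):<br>
<br>
module load mvapich2<br>
which mpicc          # should now point into the mvapich installation<br>
mpicc -o hello hello.c<br>
srun -n 2 ./hello    # inside an sbatch job or an interactive allocation<br>
<br>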
Yours,<br>
Roland<br>
<br>
<br>
> Hi Roland,<br>
> <br>
> Thank you so much. The compute nodes are able to be used for<br>
> compilation, and the directories match what is listed in<br>
> make.MPI.defn. When doing the 'hello' example you linked to, it was<br>
> unable to compile due to a linker error (/usr/bin/ld: cannot find<br>
> -lmpi_cxx). I re-ran it in verbose mode and found the directory it<br>
> was searching did exist and did have lmpi but not lmpi_cxx. The<br>
> admins said they had had some issues installing openmpi (couldn't<br>
> recall exactly what), and recommended mvapich (since that does have<br>
> lmpicxx installed and is their preferred implementation). However,<br>
> they reinstalled openmpi in an effort to get that to work and it did<br>
> allow the 'hello' script to compile, but when executed it produced:<br>
> <br>
> --------------------------------------------------------------------------<br>
> No OpenFabrics connection schemes reported that they were able to be<br>
> used on a specific port. As such, the openib BTL (OpenFabrics<br>
> support) will be disabled for this port.<br>
> <br>
> Local host: h1<br>
> Local device: mlx5_0<br>
> Local port: 1<br>
> CPCs attempted: rdmacm, udcm<br>
> --------------------------------------------------------------------------<br>
> Hello world from processor h1.quartz.uits.iu.edu, rank 0 out of 1<br>
> processors<br>
> <br>
> Similarly, doing the TOV job via sbatch, after the srun command it<br>
> gave the same OpenFabrics message (for each MPI rank) and then the<br>
> same segmentation faults as before. I've contacted the admins about<br>
> this and am waiting to hear back. Do you have any recommendations -<br>
> perhaps it would be easier to try switching over to mvapich? If so,<br>
> could you point me to some resources on how to reconfigure?<br>
> <br>
> Thank you,<br>
> Jessica<br>
> <br>
> Dr. Jessica S. Warren<br>
> Physics Lecturer<br>
> Indiana University Northwest<br>
> warrenjs@iun.edu<br>
> ________________________________<br>
> From: Roland Haas <rhaas@illinois.edu><br>
> Sent: Tuesday, August 9, 2022 9:48 AM<br>
> To: Warren, Jessica Sawyer <warrenjs@iun.edu><br>
> Cc: users@einsteintoolkit.org <users@einsteintoolkit.org><br>
> Subject: [External] Re: [Users] Running with SLURM<br>
> <br>
> Hello Jessica,<br>
> <br>
> You may also find something useful in the setting up a new machine<br>
> seminar presentation:<br>
> <br>
> <a href="https://urldefense.com/v3/__https://www.einsteintoolkit.org/seminars/2022_02_24/index.html__;!!DZ3fjg!9JAgxc4juluJwklwTQgJGsYLXJIzzdHOqX8rwuiuymRXLrFedDv4PXSatzu0HVAYDfBFpiYxw1_jUDmUew$" target="_blank" rel="noopener noreferrer" data-auth="NotApplicable">
https://urldefense.com/v3/__https://www.einsteintoolkit.org/seminars/2022_02_24/index.html__;!!DZ3fjg!9JAgxc4juluJwklwTQgJGsYLXJIzzdHOqX8rwuiuymRXLrFedDv4PXSatzu0HVAYDfBFpiYxw1_jUDmUew$</a>
<br>
> <br>
> Yours,<br>
> Roland<br>
> <br>
> --<br>
> My email is as private as my paper mail. I therefore support<br>
> encrypting and signing email messages. Get my PGP key from<br>
> <a href="https://urldefense.com/v3/__http://pgp.mit.edu__;!!DZ3fjg!9JAgxc4juluJwklwTQgJGsYLXJIzzdHOqX8rwuiuymRXLrFedDv4PXSatzu0HVAYDfBFpiYxw19et3mEyg$" target="_blank" rel="noopener noreferrer" data-auth="NotApplicable">
https://urldefense.com/v3/__http://pgp.mit.edu__;!!DZ3fjg!9JAgxc4juluJwklwTQgJGsYLXJIzzdHOqX8rwuiuymRXLrFedDv4PXSatzu0HVAYDfBFpiYxw19et3mEyg$</a><br>
> .<br>
<br>
<br>
-- <br>
My email is as private as my paper mail. I therefore support encrypting<br>
and signing email messages. Get my PGP key from <a href="http://pgp.mit.edu" target="_blank" rel="noopener noreferrer" data-auth="NotApplicable">
http://pgp.mit.edu</a> .<br>
</div>
</span></font></div>
</div>
</div>
</div>
</div>
</body>
</html>