[Users] Segmentation fault in TOV test

Rhiannon Silva SilvaRL at cardiff.ac.uk
Thu Jul 25 22:23:59 CDT 2024


Hi,

I've managed to compile ET after switching to the GNU compilers and OpenMPI, but I'm getting this linker warning:

/usr/bin/ld: warning: libgfortran.so.3, needed by /usr/lib64/../lib64/liblapack.so, may conflict with libgfortran.so.5
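
In case it's relevant, I was planning to check which gfortran runtime each piece actually links against, with something like the following (assuming the executable ends up as exe/cactus_sim, which I think is the default simfactory name, so yours may differ):

    ldd /usr/lib64/liblapack.so | grep gfortran
    ldd exe/cactus_sim | grep gfortran

but I don't know whether a mismatch there could cause a crash on its own.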


The code passed the Hello World test, but when I run the TOV test I get a segmentation fault, which seems to happen while it's testing vectorisation:

Rank 0 with PID 139005 received signal 11
Writing backtrace to tov_ET/backtrace.0.txt
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 139005 on node ccs0135 exited on signal 11 (Segmentation fault).
---
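
Since the crash seems to happen during the vectorisation self-test, one thing I was thinking of trying is rebuilding with explicit vectorisation switched off in my option list, i.e. adding something like

    VECTORISE = no

(if I've remembered the option name correctly) and then reconfiguring and rebuilding, just to see whether the segfault goes away. I don't know if that's a sensible way to narrow it down, though.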

I also get this at the start of the .out file:

[1721961556.411313] [ccs0135:139005:0]    ucp_context.c:1529 UCX  WARN  UCP version is incompatible, required: 1.11, actual: 1.9 (release 0 /lib64/libucp.so.0)
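
I can apparently check the installed UCX version with

    ucx_info -v

but I don't know whether the 1.9 vs 1.11 mismatch is harmless or something I need to fix.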


I'm also wondering about another message I get:
"This OpenMPI build is integrated with Slurm. Use 'srun' to launch in a Slurm job rather than 'mpirun'."


When I tried to use srun in the run script I ran into errors, so I changed it back to mpirun. I'm not sure whether that could be causing an issue.

Here is what happened when I used srun:

[ccs0001:248227] mca_base_component_repository_open: unable to open mca_pmix_s2: libpmi2.so.0: cannot open shared object file: No such file or directory (ignored)
[ccs0001:248227] mca_base_component_repository_open: unable to open mca_pmix_s1: libpmi.so.0: cannot open shared object file: No such file or directory (ignored)
[ccs0001:248227] OPAL ERROR: Not initialized in file pmix2x_client.c at line 112
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

  version 16.05 or later: you can use SLURM's PMIx support. This
  requires that you configure and build SLURM --with-pmix.

  Versions earlier than 16.05: you must use either SLURM's PMI-1 or
  PMI-2 support. SLURM builds PMI-1 by default, or you can manually
  install PMI-2. You must then build Open MPI using --with-pmi pointing
  to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[ccs0001:248227] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
srun: error: ccs0001: task 0: Exited with exit code 1
---
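
Before changing anything, I thought I might first check what PMI support is actually available on the cluster, along these lines (I'm not certain these are the right incantations):

    ompi_info | grep -i pmi
    srun --mpi=list

and, if pmix shows up in both, try launching with srun --mpi=pmix instead of plain srun. I haven't tried that yet, though.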


I'm not sure where to start. Is any of the above likely to be causing the segmentation fault?

Thanks,
Rhiannon.