[Users] Issue with black hole binary evolution employing IGM

Erik Schnetter schnetter at gmail.com
Thu Sep 15 08:30:24 CDT 2022


Federico,

Thanks for including the output; that is helpful.

There are parameters "Carpet::verbose" and "Carpet::veryverbose". You
can set them to "yes" and recover from a checkpoint. This gives more
information about what the code is doing, and thus where it crashes.
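As a sketch, the relevant lines in the parameter file would look like this (assuming the Carpet driver is active, as it is in the attached parameter file):

```
# Enable verbose Carpet output before recovering from the checkpoint
Carpet::verbose     = yes
Carpet::veryverbose = yes
```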

The output you attached is only from the first MPI process. Other
processes' output might contain a clue. You can add the command line
option "-roe" to Cactus when you run the simulation. This will collect
output from all processes.
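For example (a sketch only; the executable name, process count, and parameter file below are placeholders, and the exact launch command depends on how simfactory runs jobs on your machine):

```
# -roe redirects each process's stdout and stderr to its own file
# in the run directory, so crashes on other ranks become visible
mpirun -np 480 ./cactus_sim -roe s_UUmis_Q2.par
```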

-erik

On Thu, Sep 15, 2022 at 9:20 AM Federico Cattorini
<f.cattorini at campus.unimib.it> wrote:
>
> Hello everyone,
>
> I am experiencing an issue in a number of GRMHD simulations of black hole binaries that employ IllinoisGRMHD.
>
> As an example, I will write about an unequal-mass BHB configuration (with q = 2) that I'm running.
>
> After approximately ten orbits, the run stops with no error codes or any other message that could help me identify the issue. The last lines of the standard output are
>
> INFO (IllinoisGRMHD): ***** Iter. # 353949, Lev: 9, Integrating to time: 3.160260e+03 *****
> INFO (IllinoisGRMHD): C2P: Lev: 9 NumPts= 569160 | Fixes: Font= 393 VL= 179 rho*= 2 | Failures: 0 InHoriz= 0 / 0 | Error: 7.124e-02, ErrDenom: 4.838e+13 | 4.51 iters/gridpt
> INFO (IllinoisGRMHD): ***** Iter. # 353949, Lev: 9, Integrating to time: 3.160269e+03 *****
> Simfactory Done at date: gio 04 ago 2022 11:43:01 CEST
>
>
>
> I tried restarting my simulation from the latest checkpoint, but the same sudden stop occurred at the same timestep.
>
> At first, I suspected a problem with IGM. The last INFO is printed by IllinoisGRMHD_driver_evaluate_MHD_rhs.C, so I added some print statements to it to pinpoint where the error occurs.
> Unfortunately, I drew a blank, since the stop seems to occur just after the end of IllinoisGRMHD_driver_evaluate_MHD_rhs:
>
> INFO (IllinoisGRMHD): ***** line 52: entering IllinoisGRMHD_driver_evaluate_MHD_rhs *****
> INFO (IllinoisGRMHD): ***** Iter. # 353949, Lev: 10, Integrating to time: 3.160251e+03 *****
> INFO (IllinoisGRMHD): ***** line 100: IllinoisGRMHD_driver_evaluate_MHD_rhs *****
> INFO (IllinoisGRMHD): ***** line 204: just before reconstruct_set_of_prims_PPM *****
> INFO (IllinoisGRMHD): ***** DEBUG END of  IllinoisGRMHD_driver_evaluate_MHD_rhs *****
> Simfactory Done at date: gio 04 ago 2022 19:44:55 CEST
>
>
> I tried restarting the simulation with pure MPI (no OpenMP). It ran for a few more iterations, then stopped as well:
>
> INFO (IllinoisGRMHD): ***** line 52: entering IllinoisGRMHD_driver_evaluate_MHD_rhs *****
> INFO (IllinoisGRMHD): ***** Iter. # 353565, Lev: 10, Integrating to time: 3.156831e+03 *****
> INFO (IllinoisGRMHD): ***** line 100: IllinoisGRMHD_driver_evaluate_MHD_rhs *****
> INFO (IllinoisGRMHD): ***** line 204: just before reconstruct_set_of_prims_PPM *****
> INFO (IllinoisGRMHD): ***** DEBUG END of  IllinoisGRMHD_driver_evaluate_MHD_rhs *****
> Simfactory Done at date: ven 05 ago 2022 19:00:13 CEST
>
>
> The simulation setup is as follows:
>
>    Allocated:
>       Nodes:                      10
>       Cores per node:             48
>    SLURM setting
>       SLURM_NNODES :  10
>       SLURM_NPROCS :  20
>       SLURM_NTASKS :  20
>       SLURM_CPUS_ON_NODE  :  48
>       SLURM_CPUS_PER_TASK :  24
>       SLURM_TASKS_PER_NODE:  2(x10)
>    Running:
>       MPI processes:              20
>       OpenMP threads per process: 24
>       MPI processes per node:     2.0
>       OpenMP threads per core:    1.0
>       OpenMP threads per node:    48
>
>
> while the pure-MPI setup is
>
>    Allocated:
>       Nodes:                      10
>       Cores per node:             48
>    SLURM setting
>       SLURM_NNODES :  10
>       SLURM_NPROCS :  480
>       SLURM_NTASKS :  480
>       SLURM_CPUS_ON_NODE  :  48
>       SLURM_CPUS_PER_TASK :  1
>       SLURM_TASKS_PER_NODE:  48(x10)
>    Running:
>       MPI processes:              480
>       OpenMP threads per process: 1
>       MPI processes per node:     48.0
>       OpenMP threads per core:    1.0
>       OpenMP threads per node:    48
>
>
> I am using the Lorentz version of the Einstein Toolkit.
>
> I've had this issue in two binary BH simulations, both unequal-mass with q = 2. My colleague Giacomo Fedrigo experienced the same problem running an equal-mass simulation.
>
> I attach the q = 2 (s_UUmis_Q2) parameter file and the ET config-info file, along with the standard error and output of my q = 2 run and of Giacomo's run (b1_UUmis_a12b_pol3_r56_gauss_9). The standard outputs were truncated for readability.
>
> Can someone please help me with this?
>
> Thanks in advance,
>
> Federico
> _______________________________________________
> Users mailing list
> Users at einsteintoolkit.org
> http://lists.einsteintoolkit.org/mailman/listinfo/users



-- 
Erik Schnetter <schnetter at gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/

