[ET Trac] #2882: Running BBH with CarpetX with CPUs: High memory consumption and low performance

Alejandra Gonzalez trac-noreply at einsteintoolkit.org
Thu Aug 7 07:16:22 CDT 2025


#2882: Running BBH with CarpetX with CPUs: High memory consumption and low performance

 Reporter: Alejandra Gonzalez
   Status: new
Milestone: 
  Version: 
     Type: bug
 Priority: major
Component: CarpetX

Hello,  
This ticket concerns the performance of CarpetX running **with CPUs** on the new machine MareNostrum 5.

The first issue I had was the memory needed to run a q=1 configuration: I came across several out-of-memory errors (regardless of the number of nodes requested). The standard nodes in MN5 have 2 GB per core (112 cores per node, i.e. 224 GB per node), so I had to switch to the high-memory ones to be able to make it run (this is the line `#SBATCH --constraint=highmem` in the attached batch file). Is this also the case on other machines?
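
For concreteness, the high-memory request amounts to a single extra directive in the batch header; a minimal sketch (the node count is illustrative, the actual script is attached as run_cactus.sh):

```
#SBATCH --nodes=1
# Standard MN5 nodes offer 112 cores x 2 GB = 224 GB per node; this
# constraint moves the job to the high-memory nodes instead.
#SBATCH --constraint=highmem
```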
  
The second issue is that, now that it is running (`ncells=128` on 1 node), it is going awfully slow, about 0.07 M/hr, far less than I would expect from a single node. We have been running this type of binary configuration on 2-4 nodes with Carpet, at higher resolution, reaching speeds of around 4-7 M/hr, i.e. roughly two orders of magnitude faster.
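
Since CarpetX (through AMReX) is typically run MPI+OpenMP hybrid on CPUs, the split of a node's 112 cores between ranks and threads can have a large effect on performance; the sketch below shows the kind of launch configuration I mean (the rank and thread counts here are hypothetical, the actual launch line is in the attached run_cactus.sh):

```
# Hypothetical split: 8 MPI ranks x 14 OpenMP threads = 112 cores on one node.
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=14
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun ./exe/cactus_sim q1.__0._0._0.8__0._0._0.8__e0.5.par
```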
  
Attached you will find the parfile, the batch file, and the output log.
attachment: q1.__0._0._0.8__0._0._0.8__e0.5.par (https://api.bitbucket.org/2.0/repositories/einsteintoolkit/tickets/issues/2882/attachments/q1.__0._0._0.8__0._0._0.8__e0.5.par)
attachment: output.log (https://api.bitbucket.org/2.0/repositories/einsteintoolkit/tickets/issues/2882/attachments/output.log)
attachment: run_cactus.sh (https://api.bitbucket.org/2.0/repositories/einsteintoolkit/tickets/issues/2882/attachments/run_cactus.sh)


--
Ticket URL: https://bitbucket.org/einsteintoolkit/tickets/issues/2882/running-bbh-with-carpetx-with-cpus-high