[ET Trac] #2861: Running qc0 with CarpetX

Alejandra Gonzalez trac-noreply at einsteintoolkit.org
Mon Mar 10 05:45:12 CDT 2025


#2861: Running qc0 with CarpetX

 Reporter: Alejandra Gonzalez
   Status: open
Milestone: 
  Version: 
     Type: bug
 Priority: major
Component: EinsteinToolkit thorn

Comment (by Alejandra Gonzalez):

Hello Lucas, thank you. For the record, I was able to run `qc0` with the following thorns:

```
ActiveThorns = "
    ADMBaseX
    CarpetX
    CoordinatesX
    ErrorEstimator
    Formaline
    IOUtil
    TwoPuncturesX
"
```

with the following output:  

```
INFO (Formaline): Writing tarballs with the Cactus sources into the directory "qc0/cactus-source"
INFO (CarpetX): Setting initial values for max_grid_size values for all levels
INFO (CarpetX): Setting up initial conditions...
INFO (CarpetX): Iteration: 0   time: 0   delta_time: 0.25
INFO (CarpetX): Patch 0:
INFO (CarpetX):   Grid extent:
INFO (CarpetX):     gsh=[67,67,67]
INFO (CarpetX):     blocking_factor=[8,8,8]
INFO (CarpetX):     max_grid_size=[32,32,32]
INFO (CarpetX):     max_tile_size=[1024000,16,32]
INFO (CarpetX):   Domain extent:
INFO (CarpetX):     xmin=[-16,-16,-16]
INFO (CarpetX):     xmax=[16,16,16]
INFO (CarpetX):     base dx=[0.5,0.5,0.5]
INFO (CarpetX): Initializing level 0...
INFO (CarpetX): Regridding...
INFO (CarpetX):   level 0: 8 boxes, 262144 cells (100%)
INFO (CarpetX): Initialized 1 levels
INFO (CarpetX): OutputGH: iteration 0, time 0.000000, run time 5 s
INFO (CarpetX): OutputSilo...
Results from timer "OutputSilo":
        gettimeofday: 0.102 secs
Results from timer "OutputSilo":
        getrusage: 0.085 secs
INFO (CarpetX): OutputGH done.
INFO (CarpetX): Starting evolution...
INFO (CarpetX): Shutting down...
Total GPU global memory (MB) spread across MPI: [64952 ... 64952]
Free  GPU global memory (MB) spread across MPI: [64165 ... 64165]
[The         Arena] space (MB) allocated spread across MPI: [48714 ... 48714]
[The         Arena] space (MB) used      spread across MPI: [0 ... 0]
[The  Device Arena] space (MB) allocated spread across MPI: [8 ... 8]
[The  Device Arena] space (MB) used      spread across MPI: [0 ... 0]
[The  Pinned Arena] space (MB) allocated spread across MPI: [8 ... 8]
[The  Pinned Arena] space (MB) used      spread across MPI: [0 ... 0]
AMReX (24.10) finalized
--------------------------------------------------------------------------------
Done.
```

I would like to know whether this is the expected output, and whether you have any recommendation on the output format (OpenPMD, Silo, etc.) to use when running with CarpetX, or if the choice is irrelevant.
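For context, this is a sketch of how I understand the output selection would be written in the parfile. The parameter names (`CarpetX::out_silo_vars`, `CarpetX::out_openpmd_vars`) and variable groups are my assumptions and should be checked against the thorn's `param.ccl`:

```
# Sketch only -- parameter names assumed, please verify against CarpetX's param.ccl
IO::out_dir   = $parfile
IO::out_every = 64

# Silo output of selected ADM variables:
CarpetX::out_silo_vars = "ADMBaseX::metric ADMBaseX::lapse"

# Or, alternatively, openPMD output of the same groups:
# CarpetX::out_openpmd_vars = "ADMBaseX::metric ADMBaseX::lapse"
```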


--
Ticket URL: https://bitbucket.org/einsteintoolkit/tickets/issues/2861/running-qc0-with-carpetx