[Users] Contamination in restarting using Carpet mesh refinement
Kentaro Takami
kentaro.takami at aei.mpg.de
Fri Mar 29 13:33:26 CDT 2013
Dear all,
I have the following question.
In my project it is important to obtain reproducible results
when I use the same executable, initial data, and parameter file.
So I am debugging and improving our code
(as you know, one example is the dissipation thorn).
Right now I get exactly the same results every time, unless I use Carpet
with more than one refinement level (i.e., only the case with a single refinement level is OK).
When we restart a simulation with several refinement levels from
checkpoint files, the results change slightly.
So I investigated the reason, and I noticed that the finer-grid variables are
contaminated between (A) and (B) in the following Cactus scheduling:
========== Related part of Cactus scheduling ==========
[CCTK_RECOVER_VARIABLES]
IOUtil::IOUtil_RecoverGH: [level] Checkpoint recovery routine
[CCTK_POST_RECOVER_VARIABLES]
....
(A) < AT THE END OF THIS SCHEDULE BIN, THE VARIABLES ARE STILL FINE >
endif
if (checkpoint initial data)
[CCTK_CPINITIAL]
CarpetIOHDF5::CarpetIOHDF5_InitialDataCheckpoint: [meta]
Initial data checkpoint routine
endif
if (analysis)
[CCTK_ANALYSIS]
(B) < AT THE TOP OF THIS SCHEDULE BIN, THE VARIABLES ARE CONTAMINATED >
.....
endif
Output grid variables
do loop over timesteps
[CCTK_PREREGRID]
================================================
Apparently no thorn is scheduled between (A) and (B).
So what is happening between (A) and (B)?
Can I avoid this change of the variables?
Do I need some special parameters?
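To make the question concrete, below is a rough sketch of the kind of check
that can be scheduled at both (A) and (B) to compare the data. The routine
name DebugNorm_Print and the grid function ADMBase::gxx are only placeholders
for whatever one wants to watch; it simply prints the maximum absolute value
of one grid function per grid component, with the refinement factor to
identify the level.
========== Sketch of a debug print routine (placeholders only) ==========
/* schedule.ccl (sketch):
 *   schedule DebugNorm_Print AT post_recover_variables { LANG: C } "Print norm after recovery"
 *   schedule DebugNorm_Print AT analysis               { LANG: C } "Print norm at analysis"
 */
#include <math.h>

#include "cctk.h"
#include "cctk_Arguments.h"

void DebugNorm_Print(CCTK_ARGUMENTS)
{
  DECLARE_CCTK_ARGUMENTS;

  /* "ADMBase::gxx" is only a placeholder; use any grid function that
     has storage at this point. */
  const CCTK_REAL *gf =
    (const CCTK_REAL *) CCTK_VarDataPtr(cctkGH, 0, "ADMBase::gxx");
  if (!gf) {
    CCTK_WARN(CCTK_WARN_ALERT, "grid function not available");
    return;
  }

  /* Maximum absolute value over the local grid component */
  CCTK_REAL maxabs = 0.0;
  for (int k = 0; k < cctk_lsh[2]; k++)
    for (int j = 0; j < cctk_lsh[1]; j++)
      for (int i = 0; i < cctk_lsh[0]; i++) {
        const CCTK_REAL v = fabs(gf[CCTK_GFINDEX3D(cctkGH, i, j, k)]);
        if (v > maxabs) maxabs = v;
      }

  /* cctk_levfac identifies the refinement level (1, 2, 4, ... for
     factor-2 refinement); printing with full precision allows a
     bitwise comparison between (A) and (B). */
  CCTK_VInfo(CCTK_THORNSTRING, "it=%d levfac=%d max|gxx| = %.17g",
             (int) cctk_iteration, (int) cctk_levfac[0], (double) maxabs);
}
================================================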
Further information:
In the test case, I used the following conditions:
* Carpet and CarpetRegrid2 with 2 refinement levels
(in this case, RL1 is OK, but RL2 is contaminated).
* Of course I am using the parameters
Carpet::regrid_during_recovery = "no" and
CarpetIOHDF5::use_grid_structure_from_checkpoint = "yes"
(see the sketch after this list).
* Debug options such as -O0, -fp-model precise, VECTORISE = no, and
OPENMP = no when compiling.
* Even if we use only 1 MPI process, the contamination is still there.
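For reference, the recovery-related part of the parameter file then looks
roughly like this (a sketch: only the two Carpet/CarpetIOHDF5 lines are the
settings mentioned above; the IO::recover line is just the usual autoprobe
choice and is written here only as an assumption):
========== Recovery-related parameters (sketch) ==========
IO::recover                                      = "autoprobe"
Carpet::regrid_during_recovery                   = "no"
CarpetIOHDF5::use_grid_structure_from_checkpoint = "yes"
================================================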
Sincerely yours
Kentaro TAKAMI