[Users] More Bizarre Scheduling Behavior in a Carpet Run (Bug?!)

Zach Etienne zachetie at gmail.com
Tue Jun 2 09:27:38 CDT 2015


Hello.

I am writing a diagnostic that reads in ADMBase variables (e.g., alp,
betax) from a binary black hole simulation using McLachlan. The
diagnostic is computed at CCTK_ANALYSIS, in a GLOBAL,LOOP-LOCAL context.
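
For concreteness, the schedule declaration looks roughly like this (a
minimal sketch; the function and group names here are placeholders, not
the actual ones from my thorn):

  # schedule.ccl sketch -- function/group names are hypothetical
  schedule Diag_Compute AT CCTK_ANALYSIS
  {
    LANG:    C
    OPTIONS: global loop-local
    SYNC:    diag_data_group
  } "Compute diag_data_gf on all refinement levels"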

Call the gridfunction computed by this diagnostic diag_data_gf. This
gridfunction is rather expensive to compute in general, so I only want it
computed and output (to, e.g., IOASCII 2D data files) every 64 iterations.
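
In the parameter file, that output cadence would be requested with
something like the following sketch (the thorn/variable name
"MyDiag::diag_data_gf" is assumed for illustration):

  # par-file sketch -- "MyDiag::diag_data_gf" is a hypothetical name
  IOASCII::out2D_every  = 64
  IOASCII::out2D_vars   = "MyDiag::diag_data_gf"
  IOASCII::out3D_ghosts = no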

Setting regrid_every=64, I found that the diagnostic outputs
reasonable-looking data every 64 iterations until the first actual AMR grid
movement (at iteration 192). At this iteration, a large number of zones
near AMR refinement boundaries in the IOASCII 2D output of diag_data_gf are
set to undefined values (I have set IOASCII::out3D_ghosts=no). This is
undesirable behavior!

Here is what I want to happen:
1) Every 64 iterations, the diagnostic computes diag_data_gf at all
gridpoints on all refinement levels at CCTK_ANALYSIS (I believe
GLOBAL,LOOP-LOCAL should do this; see the sketch after this list).
2) After CCTK_ANALYSIS, diag_data_gf should be output to files.
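
One way to realize step 1 is to impose the every-64-iterations
restriction inside the routine itself. A minimal sketch in Cactus C,
with hypothetical function/variable names (not my actual diagnostic):

  #include "cctk.h"
  #include "cctk_Arguments.h"

  void Diag_Compute(CCTK_ARGUMENTS)
  {
    DECLARE_CCTK_ARGUMENTS;

    /* Skip the expensive computation except every 64th iteration. */
    if (cctk_iteration % 64 != 0)
      return;

    /* Loop over all points of the local grid component; with
       OPTIONS: global loop-local, this routine is called on every
       refinement level before file output. */
    const int npts = cctk_lsh[0] * cctk_lsh[1] * cctk_lsh[2];
    for (int i = 0; i < npts; i++)
      diag_data_gf[i] = 0.0;  /* stand-in for the expensive formula */
  }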

So why does my scheduling choice yield obviously wrong values near AMR
boundaries after the grids move?

********
Now here is where the situation becomes very weird:
********
When I set diag_data_gf to be computed at *every* iteration, the undefined
value problem disappears! Remember, it is being called at CCTK_ANALYSIS in
a GLOBAL,LOOP-LOCAL context, so diag_data_gf should be recomputed at all
points on all levels prior to file output. Where are these mysterious
undefined values coming from?!

Upon closer inspection, even at iteration 64 (i.e., before any grid
movement) the diagnostic yields inconsistent results on all refinement
levels except the finest one.

********
I have created a very simple thorn called ADMBaseMcLachlanTester (
math.wvu.edu/~zetienne/ADMBaseMcLachlanTester.tar.gz) that reproduces this
problem (in both the ET_2014_11 and ET_2015_05 releases) with a minimum of
coding. All the thorn does is define a gridfunction called
"myadmbaselapse", which, appropriately enough, is set to ADMBase::alp at
all gridpoints.
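
For reference, the entire computation amounts to something like this
sketch (the function name here is illustrative; the authoritative source
is in the tarball):

  #include "cctk.h"
  #include "cctk_Arguments.h"

  void ADMBaseMcLachlanTester_SetLapse(CCTK_ARGUMENTS)
  {
    DECLARE_CCTK_ARGUMENTS;

    /* Copy ADMBase::alp into myadmbaselapse at every local point. */
    const int npts = cctk_lsh[0] * cctk_lsh[1] * cctk_lsh[2];
    for (int i = 0; i < npts; i++)
      myadmbaselapse[i] = alp[i];
  }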

In the thorn's par/ subdirectory, you'll find two parfiles:
qc0-mclachlan-setlapseevery1.par and qc0-mclachlan-setlapseevery64.par.
The former sets myadmbaselapse = alp at every cctk_iteration in
CCTK_ANALYSIS; the latter does so only every 64 iterations. You will
notice that the former parfile yields reasonable data at iteration 192 in
the admbasemclachlantester::admbasemclachlantestergfs.*.asc files. With
the latter parfile, however, many NaNs are produced at iteration 192
(corresponding to the first AMR grid movement).

What is causing this weird behavior? Have I uncovered a bug?

-Zach

*     *     *
Zachariah Etienne
Assistant Professor of Mathematics
West Virginia University