[Users] Schedule options and uninitialized refinement levels
Jordan Nicoules
jnicoules at ua.pt
Thu Sep 26 08:49:27 CDT 2024
Dear Roland,
Thank you very much for your reply! It does help me understand the situation better and make sense of what I see. I am attaching the schedule.ccl and param.ccl files. Please be aware that I'm not starting from scratch and that some portions have probably been inspired by other thorns, so I'm fully open to suggestions or corrections.
I have a few follow-up comments and questions. Unfortunately, I can't tune in to today's weekly meeting, but maybe eventually it will be easier to interact more directly through this means.
> Short answer: it's (mostly) only the output that is wrong (if you were
> to look at the data on the grid chances are it would look correct). But
> the data will never be exactly correct unless you compute things in
> EVOL.
> Hmm, this should compute the correct values for most of the grid points.
> However my suspicion is that when you say that it does not contain the
> expected values then that is based on what you see in output files if
> you add some sort of `out_var = "dE_gf_volume"` to the parameter files,
> yes?
By "looking at the data on the grid", do you mean using a print statement in the code? Or is there another way, through the parameter file? Indeed, what I was referring to was 'out_var = "density_rho"' in the parameter file.
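For concreteness, this is the kind of parameter-file output I meant (a sketch only; the thorn name UAv_Analysis and the every values are placeholders from my setup, and I am assuming the standard CarpetIOHDF5/CarpetIOASCII output parameters):

```
# Sketch: requesting output of an analysis grid function
# (thorn name UAv_Analysis assumed; adjust "every" to the run)
IOHDF5::out_every = 64
IOHDF5::out_vars  = "UAv_Analysis::density_rho"

IOASCII::out1D_every = 64
IOASCII::out1D_vars  = "UAv_Analysis::density_rho"
```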
> *If* your calculations are all pointwise (not derivatives etc.) and
> you have only a *single* time level active for the grid functions that
> you compute *then* (and only then) can you compute things in ANALYSIS.
> If your do_analysis_every variable is set such that things are *only*
> computed (and output, or otherwise used) when all refinement levels are
> in sync (ie every_coarse) then you can do a SYNC (but you still don't
> need more than 1 timelevel) without pulling in bad data. However your
> answer in parts of the grid will be wrong since there will not be any
> restriction of the data computed on a fine grid to the coarse grid
> (which happens only when going from EVOL to POSTSTEP). This matters
> only for stencil operations but not for operations that are
> pointwise local (since they'd compute the same value either way). This
> will (mostly...) not affect any interpolation results and will not (I
> think) affect reductions (min, max, norm2 whatnot). But ... this is very
> tricky.
> The safest strategy is to compute everything in EVOL (not POSTSTEP), in
> particular if it involves stencil operations (in which case it is
> almost a requirement). You will need 3 timelevels though, at least if
> you plan on doing interpolations or reduction output at times other than
> every_coarse (and you need to compute at every iteration, or you are
> on very very thin ice).
> Your schedule statement makes me suspect that your operation is not
> fully pointwise and does instead involve a stencil operation. In that
> case, the best you can achieve is making sure that there is only 1 time
> level for the variables and that you compute only when all refinement
> levels are in sync in time (so every coarse). Note that you will still
> get incorrect results in the regions of the coarse grid that are
> overlaid by the fine grid (no restriction so you are left with the
> result computed on the coarse grid), but that may not matter much to
> you.
To clarify:
The density_rho and density_p grid functions are computed for output purposes. The variables dE_gf_volume, ... are auxiliary grid functions used to compute total_energy, ... through a sum reduction. Those only really make sense at every coarse time step, so do_analysis_every should indeed be a multiple of every_coarse; I used a smaller value only for debugging and understanding purposes.
By the way, is "every_coarse" an actual parameter that I could query, for instance in a ParamCheck function, to ensure that do_analysis_every is indeed well chosen? Or is it ultimately up to the user to design the parameter file properly?
All operations are indeed pointwise; there are no derivatives involved. Even though the GF variables are defined with 3 time levels, it seems to me that only one is actually necessary (unless I'm missing something, since these are not evolved variables).
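If one time level really suffices, I suppose the group declaration in interface.ccl would become something like this (a sketch; group and variable names are mine, not necessarily the final ones):

```
# interface.ccl sketch: a single time level, since these are not evolved
CCTK_REAL analysis_gfs type=GF timelevels=1
{
  density_rho, density_p, dE_gf_volume
} "Pointwise analysis quantities"
```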
Do I understand correctly then, that in that case I will not get incorrect results in the regions of the coarse grid that are overlaid by the fine grid?
In particular, when I compute the sum reduction of dE_gf_volume to get the integral total_energy, the result I get is very sensible. I was wondering whether the reduction operation was somehow "magically" navigating the finer levels behind the scenes, but from what you say, it really performs the reduction only on the coarse level, doesn't it?
I'm also not sure about the SYNC then. In the routine UAv_Analysis_gfs, the loop actually excludes the ghost zones, in the fashion of
do k = 1+cctk_nghostzones(3), cctk_lsh(3)-cctk_nghostzones(3)
so the SYNC may seem pointless, except for visualization purposes (for instance, in VisIt), right?
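For reference, the routine in question is currently scheduled roughly like this (simplified sketch; routine and group names are from my thorn and the SYNC is the statement I am unsure about):

```
# schedule.ccl sketch: analysis routine in the ANALYSIS bin
SCHEDULE UAv_Analysis_gfs AT analysis
{
  LANG: Fortran
  SYNC: analysis_gfs
} "Compute pointwise analysis grid functions"
```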
Given the properties and goals of the quantities I'm computing, and what you said, it sounds like I could leave the computation in ANALYSIS. But you seemed to favor EVOL. What would your advice be now, with this additional information? I still need to get the mask from AHFinderDirect, and from my understanding of that thorn's param.ccl, it runs at POSTSTEP at the earliest, doesn't it?
Many thanks once again!
Best,
Jordan
-------------- next part --------------
A non-text attachment was scrubbed...
Name: schedule.ccl
Type: application/octet-stream
Size: 2066 bytes
Desc: schedule.ccl
URL: <http://lists.einsteintoolkit.org/pipermail/users/attachments/20240926/5cabadd6/attachment.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: param.ccl
Type: application/octet-stream
Size: 568 bytes
Desc: param.ccl
URL: <http://lists.einsteintoolkit.org/pipermail/users/attachments/20240926/5cabadd6/attachment-0001.obj>