[Users] Schedule options and uninitialized refinement levels

Roland Haas rhaas at illinois.edu
Fri Sep 27 15:53:52 CDT 2024


Hello Jordan,

> By looking at the data on the grid, you mean like using a print in
> the code? Or is there another way through the parameter file? Indeed,
> what I was referring to was 'out_var = "density_rho"' in the
> parameter file.

Yes, if you were to add a printf (or similar) statement then you would
see the data on the grid, e.g. during the next iteration (or in another
`global loop-local` routine in ANALYSIS, but not in a `local` routine in
ANALYSIS, which would have the same issue as output).

> To clarify:
> The density_rho and density_p grid functions are computed for output
> purposes. The variables dE_gf_volume, ... are auxiliary grid
> functions which are used to compute total_energy, ... through a sum
> reduction. So those only really make sense every coarse time step,
> thus do_analysis_every should indeed be a multiple of every_coarse. I
> used a smaller one only for debugging and understanding purposes. By
> the way, is "every_coarse" an actual parameter that I could call, for
> instance in a ParamCheck function, to ensure that do_analysis_every
> is well-chosen indeed? 

No, there is no such predefined variable. It is just a "speaking"
variable name.

> Or is that ultimately up to the user to
> properly design the parameter file?

It is up to the user. You can define helper variables in parameter
files if you like, e.g. like so:

$every_coarse = 2**(Carpet::max_num_levels - 1)

assuming all your time_refinement_factors are 2. It is up to the user
to ensure that this is correct though.
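
For example, something along these lines (a sketch only; the thorn and
parameter names UAv_Analysis::do_analysis_every and
IOScalar::outScalar_every are guesses based on this thread and the
standard CarpetIOScalar thorn, so adjust them to whatever you actually
use):

$every_coarse = 2**(Carpet::max_num_levels - 1)

Carpet::max_num_levels          = 5
UAv_Analysis::do_analysis_every = $every_coarse
IOScalar::outScalar_every       = $every_coarse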

> All operations are indeed pointwise, there are no derivatives
> involved. Even though the GF variables are defined with 3 time
> levels, it feels to me that only one is necessary indeed (unless I'm
> missing something, but these are not evolved variables). 

The 3 time levels would be used for interpolation in time if you ask
for reduction output at timesteps that are not coarse time steps.

> Do I
> understand correctly then, that in that case I will not get incorrect
> results in the regions of the coarse grid that are overlaid by the
> fine grid? In particular, as I compute the sum reduction of
> dE_gf_volume to get the integral total_energy, the result I get is
> very sensible. I was wondering if the reduction operation was somehow
> "magically" navigating the finer levels behind the scenes, but from
> what you say, it really does only the reduction on the coarser level,
> doesn't it?

It does navigate the finer levels. The "sum" reduction computes a
Riemann sum taking the (relative) volume of grid cells on the
refinement levels into account. It also takes care of removing
double-counting due to the overlapping fine / coarse grids.

In your case, purely pointwise calculation, all will be fine and the
answer will be correct.


For non-pointwise operations there is one point where you will get a
very slightly different answer than what would be most correct: the
last fine grid point, where the grid transitions to the coarse grid.
There the data will be slightly wrong, since for that one location both
the coarse and the fine grid contribute half a grid cell each, but the
coarse grid data has not been updated via restriction from the fine
grid, so the result differs slightly from what you would expect.

Basically if I try and draw this and give the weight that is used for
each grid cell then this is what things look like:

                           transition
			       v

fine level :                   x   x   x   x   x
weight:                       1/2  1   1   1  1/2
cell boundary:               ^   ^   ^   ^   ^   ^
integration:                   |***************|
 
coarse level:          x       x       x       x       x
weight                 1      1/2      0      1/2      1
cell boundary:      ^      ^       ^       ^       ^
integration:     **************|               |******************

So you can see that, for the cell I have marked with "transition", half
the contribution to the Riemann sum should come from the fine grid (the
right-hand half of the fine cell centered at that location) and half
from the coarse grid (the left-hand half of the coarse cell centered at
that same location).

Without the restriction, the answer computed on the coarse grid for the
grid point marked "transition" will be slightly different from the one
on the fine grid (since the grid spacing is larger, the neighbouring
values are different). For a purely pointwise calculation the same
number is computed on the fine and on the coarse grid.
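
If it helps, here is a tiny standalone Fortran toy (not Carpet code,
just the weights and spacings from the sketch above) that adds up the
weighted contributions; with f == 1 it simply returns the covered
length:

    program riemann_weights
      implicit none
      ! weights as drawn above: 1/2 at the ends of the fine patch,
      ! 0 on the coarse point fully covered by the fine grid,
      ! 1/2 on the coarse points at the transition
      double precision, parameter :: dxf = 0.5d0, dxc = 1.0d0
      double precision :: wf(5), wc(5), ff(5), fc(5), total
      wf = (/ 0.5d0, 1.0d0, 1.0d0, 1.0d0, 0.5d0 /)
      wc = (/ 1.0d0, 0.5d0, 0.0d0, 0.5d0, 1.0d0 /)
      ff = 1.0d0   ! pretend the integrand is f == 1
      fc = 1.0d0
      total = sum(wf*ff)*dxf + sum(wc*fc)*dxc
      print *, 'integral = ', total   ! 4*0.5 + 3*1 = 5
    end program riemann_weights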

> I'm also not sure about the SYNC then. In the routine
> UAv_Analysis_gfs, the loop actually excludes ghost zones, in the
> fashion of do k = 1+cctk_nghostzones(3),
> cctk_lsh(3)-cctk_nghostzones(3) so it may seem pointless, except for
> visualization purposes, right? (for instance, in VisIt)

A SYNC will fill in values for the ghost zones and mesh refinement
boundaries. Those are indeed used by visualization. They are skipped by
reductions. 

Note that by using

k = 1+cctk_nghostzones(3),cctk_lsh(3)-cctk_nghostzones(3) 

you are also skipping a layer of ghost points at the outer (physical)
boundaries, and those *will* be used by the reduction (or at least the
innermost boundary point will be used, with a weight of 1/2).
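
If you want the loop to also cover the outer boundary points while
still skipping ghost / interprocessor zones, something along these
lines should work (a sketch only; cctk_bbox comes with CCTK_ARGUMENTS
and is 1 on faces that are outer boundaries of the grid, though it does
not distinguish symmetry boundaries from true physical ones):

    integer :: k, kmin, kmax
    kmin = 1 + cctk_nghostzones(3)
    kmax = cctk_lsh(3) - cctk_nghostzones(3)
    if (cctk_bbox(5) /= 0) kmin = 1            ! lower z face is an outer boundary
    if (cctk_bbox(6) /= 0) kmax = cctk_lsh(3)  ! upper z face is an outer boundary
    do k = kmin, kmax
      ! ... same loop body as before, plus analogous bounds for i and j
    end do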

> Given the properties and goals of the quantities I'm using, and what
> you said, it sounds like I could leave that in ANALYSIS. 

For just output you are fine. You cannot use anything computed in
ANALYSIS in EVOL though (or at least it may not be what you expect it
to be).

> But you
> seemed to favor EVOL. 

EVOL is safer since anything computed there can be used both in EVOL
and in ANALYSIS / POSTSTEP, and it is the only option for anything
involving derivatives. So if you ask which option to choose, and I do
not want to / cannot safely give you the detailed reasoning above or
actually verify that things work as I think they should, I'll err on
the side of caution. Note that CCTK_POSTSTEP and MoL_PostStep are very
different.

> What would now be your advice, with the
> additional information? I still need to get the mask from
> AHFinderDirect, and from my understanding of the param.ccl of this
> thorn, it's at best run at POSTSTEP, isn't it?

AHFinderDirect sets the mask in either ANALYSIS or POSTSTEP (depending
on parameters). For modern codes ANALYSIS and POSTSTEP are identical
(scheduled routines in ANALYSIS could use a TRIGGER, but that runs into
the same problems I warned you about, so it is not used anymore
nowadays).
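
The switches are, if I remember the names correctly (please
double-check them against AHFinderDirect's param.ccl):

AHFinderDirect::run_at_CCTK_ANALYSIS = "no"
AHFinderDirect::run_at_CCTK_POSTSTEP = "yes"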

AHFinderDirect reads variables that were set in EVOL (the ADMBase
variables) and then sets variables (ahmask, the spherical surfaces)
that are usable in ANALYSIS / POSTSTEP. Using the mask (or the
spherical surfaces, for that matter) in EVOL can be tricky: it only
sort of works if there is just one time level, in which case the
previous value of the mask will be used in EVOL.

Yours,
Roland

-- 
My email is as private as my paper mail. I therefore support encrypting
and signing email messages. Get my PGP key from http://pgp.mit.edu .