<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style type="text/css" style="display:none;"><!-- P {margin-top:0;margin-bottom:0;} --></style>
</head>
<body dir="ltr">
<div id="divtagdefaultwrapper" dir="ltr" style="font-size:12pt; color:rgb(0,0,0); font-family:Calibri,Helvetica,sans-serif,'EmojiFont','Apple Color Emoji','Segoe UI Emoji',NotoColorEmoji,'Segoe UI Symbol','Android Emoji',EmojiSymbols">
<p>Hi Roland,</p>
<p><br>
</p>
<p>Thank you very much once again for the detailed answer!</p>
<p>In the meantime, I had a look at CarpetReduce's reduce.cc. Without going into full details, I could see that reductions do indeed go through the various levels, with the correct multiplicative factors, as I first thought. But your explanation adds
another layer of understanding.</p>
<p><br>
</p>
<p><br>
</p>
<p>While I'm at it, I would like to ask another question that came up during this experimentation process, regarding ghost points.</p>
<p>I noticed that at the first iteration which completes a time step for a given level - for instance, level N at iteration 1, level N-1 at iteration 2, level N-2 at iteration 4... - some points of density_rho at the edge of the refinement level were still
uninitialized. This doesn't happen for evolved variables, or when UAv_Analysis_group is scheduled in EVOL (drawing inspiration from mclachlan's ML_ADMConstraints, I put it in <span>MoL_PseudoEvolution after MoL_PostStep for experimentation</span>).</p>
<p><br>
</p>
<p>While I assumed that there would be ghost_size such points, some quick experiments rather seemed to indicate ~3*ghost_size points. I noticed that the last points with actual values correspond to coordinates \pm CarpetRegrid2::radius_1[i],
whereas the proper end of the grid corresponds to the grid structure given in the standard output (and recovered with Kuibit and VisIt).</p>
<p>For example, with <br>
</p>
<p> CoordBase::dx = 1.0 <br>
</p>
<p> driver::ghost_size = 3 <br>
</p>
<p> CarpetRegrid2::radius_1[1] = 24.0</p>
<p>then level 1 extends up to 28.5, which is 9 points away from x=24.<br>
</p>
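<p>Spelled out (a quick check of the numbers above, assuming a spatial refinement factor of 2 between levels):</p>

```python
# Checking the numbers quoted above (refinement factor of 2 assumed).
dx_coarse = 1.0                   # CoordBase::dx
ghost_size = 3                    # driver::ghost_size
dx_fine = dx_coarse / 2           # grid spacing on level 1
extra = (28.5 - 24.0) / dx_fine   # points between radius_1[1] and the level edge
assert extra == 9 == 3 * ghost_size
```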
<p><br>
</p>
<p>Now, since the UAv_Analysis variables are only meant for output every every_coarse iterations (at most), I was not too worried by this specifically. However, I have also noticed this pattern when looking at LeanBSSNMoL::ham, for instance. I can see constraint violations related
to level boundaries, which I expected. But they line up with CarpetRegrid2::radius_1[i], and not with the grid structure, which I found puzzling. I can see this already at iteration 0.</p>
<p><br>
</p>
<p>Would you have some more explanation about this please?</p>
<p><br>
</p>
<p>Best,</p>
<p><br>
</p>
<p>Jordan<br>
</p>
<p><br>
</p>
<br>
<br>
<div style="color:rgb(0,0,0)">
<div>
<hr tabindex="-1" style="display:inline-block; width:98%">
<div id="x_divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" color="#000000" style="font-size:11pt"><b>From:</b> Roland Haas &lt;rhaas@illinois.edu&gt;<br>
<b>Sent:</b> Friday, September 27, 2024 21:53<br>
<b>To:</b> Jordan Nicoules<br>
<b>Cc:</b> users@einsteintoolkit.org<br>
<b>Subject:</b> Re: [Users] Schedule options and uninitialized refinement levels</font>
<div> </div>
</div>
</div>
<font size="2"><span style="font-size:10pt">
<div class="PlainText">Hello Jordan,<br>
<br>
> By looking at the data on the grid, you mean like using a print in<br>
> the code? Or is there another way through the parameter file? Indeed,<br>
> what I was referring to was 'out_var = "density_rho"' in the<br>
> parameter file.<br>
<br>
Yes, if you were to add a printf (or so) statement then you would see<br>
the data on the grid eg during the next iteration (or in another<br>
`global loop-local` routine in ANALYSIS, but not in a `local` routine in<br>
ANALYSIS which would have the same issue as output).<br>
<br>
> To clarify:<br>
> The density_rho and density_p grid functions are computed for output<br>
> purposes. The variables dE_gf_volume, ... are auxiliary grid<br>
> functions which are used to compute total_energy, ... through a sum<br>
> reduction. So those only really make sense every coarse time step,<br>
> thus do_analysis_every should indeed be a multiple of every_coarse. I<br>
> used a smaller one only for debugging and understanding purposes. By<br>
> the way, is "every_coarse" an actual parameter that I could call, for<br>
> instance in a ParamCheck function, to ensure that do_analysis_every<br>
> is well-chosen indeed? <br>
<br>
No, there is no such predefined variable. It is just a descriptive<br>
variable name.<br>
<br>
> Or is that ultimately up to the user to<br>
> properly design the parameter file?<br>
<br>
It is up to the user. You can define some helper variables if you like<br>
in parameter files eg like so:<br>
<br>
$every_coarse = 2**(Carpet::max_num_levels - 1)<br>
<br>
assuming all your time_refinement_factors are 2. It is up to the user<br>
to ensure that this is correct though.<br>
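As an illustration of the rule of thumb, the same number can be computed by a throwaway Python helper (hypothetical, not part of Carpet or Cactus):<br>

```python
def every_coarse(max_num_levels, time_refinement_factor=2):
    # Iterations per coarse time step, assuming every level refines
    # time by the same constant factor. Hypothetical helper, not a
    # Carpet parameter; mirrors $every_coarse from the parameter file.
    return time_refinement_factor ** (max_num_levels - 1)

# e.g. with 7 levels and factor-2 time refinement: 2**6 = 64 iterations
```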
<br>
> All operations are indeed pointwise, there are no derivatives<br>
> involved. Even though the GF variables are defined with 3 time<br>
> levels, it feels to me that only one is necessary indeed (unless I'm<br>
> missing something, but these are not evolved variables). <br>
<br>
The 3 time levels would be used for interpolation in time if you ask<br>
for reduction output at timesteps that are not coarse time steps.<br>
<br>
> Do I<br>
> understand correctly then, that in that case I will not get incorrect<br>
> results in the regions of the coarse grid that are overlaid by the<br>
> fine grid? In particular, as I compute the sum reduction of<br>
> dE_gf_volume to get the integral total_energy, the result I get is<br>
> very sensible. I was wondering if the reduction operation was somehow<br>
> "magically" navigating the finer levels behind the scenes, but from<br>
> what you say, it really does only the reduction on the coarser level,<br>
> doesn't it?<br>
<br>
It does navigate the finer levels. The "sum" reduction computes a<br>
Riemann sum taking the (relative) volume of grid cells on the<br>
refinement levels into account. It also takes care of removing<br>
double-counting due to the overlapping fine / coarse grids.<br>
<br>
In your case, a purely pointwise calculation, all will be fine and the<br>
answer will be correct.<br>
<br>
<br>
For non-pointwise operations there is one point where you will get a very<br>
slightly different answer than what would be most correct: at the last<br>
fine grid point, where the grid transitions to the coarse grid, the data<br>
will be slightly wrong. For that one location both the coarse and<br>
the fine grid contribute half a grid cell each, but the coarse grid data<br>
has not been updated via restriction from the fine grid, so this results in<br>
slightly different answers than expected.<br>
<br>
Basically if I try and draw this and give the weight that is used for<br>
each grid cell then this is what things look like:<br>
<br>
<pre>
                  transition
                       v

fine level :           x   x   x   x   x
weight:               1/2  1   1   1  1/2
cell boundary:       ^   ^   ^   ^   ^   ^
integration:           |***************|

coarse level:  x       x       x       x       x
weight         1      1/2      0      1/2      1
cell boundary:     ^       ^       ^       ^
integration: **********|               |**********
</pre>
<br>
So you can see that for the cell that I have marked with "transition"<br>
when it comes to the Riemann sum, half the contribution should come from<br>
the fine grid (the right hand half of the fine cell centered at that<br>
location) and half from the coarse grid (the left hand half of the<br>
coarse cell centered at that same location).<br>
<br>
Without the restriction the answer computed on the coarse grid for the<br>
grid point marked by the "transition" marker, will be slightly<br>
different (since the grid spacing is larger, the neighbouring values<br>
are different) than on the fine grid. For a purely pointwise<br>
calculation the same number will be computed on the fine and on the<br>
coarse grid.<br>
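To make the weighting concrete, here is a minimal 1D sketch (Python, an illustration only, not Carpet's actual reduce.cc) of the "sum" reduction over a coarse grid with one nested fine patch, using the cell weights from the diagram above:<br>

```python
def sum_reduction(coarse_x, fine_x, dxc, dxf, fine_lo, fine_hi, f):
    """Riemann sum over a coarse grid with one nested fine patch.
    Coarse points strictly inside the patch get weight 0 (masked out),
    the two transition points get 1/2 on each level, everything else 1."""
    total = 0.0
    for x in coarse_x:
        if fine_lo < x < fine_hi:
            w = 0.0   # overlapped by the fine grid: avoid double-counting
        elif x in (fine_lo, fine_hi):
            w = 0.5   # transition: half a coarse cell remains
        else:
            w = 1.0
        total += w * dxc * f(x)
    for x in fine_x:
        w = 0.5 if x in (fine_lo, fine_hi) else 1.0  # half cells at patch edges
        total += w * dxf * f(x)
    return total

# Constant integrand: coarse points 0..16 (dx=2), fine patch [4,12] (dx=1).
coarse = [2 * i for i in range(9)]
fine = [4 + i for i in range(9)]
total = sum_reduction(coarse, fine, 2.0, 1.0, 4, 12, lambda x: 1.0)
# total == 18.0, the full covered length [-1, 17], counted exactly once
```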
<br>
> I'm also not sure about the SYNC then. In the routine<br>
> UAv_Analysis_gfs, the loop actually excludes ghost zones, in the<br>
> fashion of do k = 1+cctk_nghostzones(3),<br>
> cctk_lsh(3)-cctk_nghostzones(3) so it may seem pointless, except for<br>
> visualization purposes, right? (for instance, in VisIt)<br>
<br>
A SYNC will fill in values for the ghost zones and mesh refinement<br>
boundaries. Those are indeed used by visualization. They are skipped by<br>
reductions. <br>
<br>
Note that by using<br>
<br>
k = 1+cctk_nghostzones(3),cctk_lsh(3)-cctk_nghostzones(3) <br>
<br>
you are also skipping a layer of ghost points at the outer (physical)<br>
boundaries, and those *will* be used by the reduction (or at least the<br>
innermost boundary point will be used, with a weight of 1/2).<br>
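A sketch of how one might pick loop bounds that skip inter-process ghost zones but keep outer boundary points (Python illustration of the idea only; in a real thorn one would consult cctk_bbox to detect physical boundaries):<br>

```python
def loop_bounds(lsh, nghost, is_physical_lo, is_physical_hi):
    """1-based inclusive Fortran-style bounds for one direction.
    Skip inter-process ghost zones, but keep outer (physical)
    boundary points, since the reduction does use those."""
    lo = 1 if is_physical_lo else 1 + nghost
    hi = lsh if is_physical_hi else lsh - nghost
    return lo, hi

# On a process owning the lower physical boundary, lsh=20, ghost_size=3:
# loop_bounds(20, 3, True, False) == (1, 17)
```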
<br>
> Given the properties and goals of the quantities I'm using, and what<br>
> you said, it sounds like I could leave that in ANALYSIS. <br>
<br>
For just output you are fine. You cannot use anything computed in<br>
ANALYSIS in EVOL though (or at least it may not be what you expect it<br>
to be).<br>
<br>
> But you<br>
> seemed to favor EVOL. <br>
<br>
EVOL is safer since it can be used both in EVOL and in ANALYSIS /<br>
POSTSTEP, and is the only option for anything involving derivatives. So<br>
if you ask which option to choose and I do not want / cannot safely<br>
give you the detailed reasoning above / actually verify that things<br>
work as I think they should, I'll err on the side of caution. Note that<br>
CCTK_POSTSTEP and MoL_PostStep are very different.<br>
<br>
> What would now be your advice, with the<br>
> additional information? I still need to get the mask from<br>
> AHFinderDirect, and from my understanding of the param.ccl of this<br>
> thorn, it's at best run at POSTSTEP, isn't it?<br>
<br>
AHFinderDirect sets the mask in either ANALYSIS or POSTSTEP (depending<br>
on parameters). For modern codes ANALYSIS and POSTSTEP are identical<br>
(scheduled routines in ANALYSIS could use a TRIGGER but that one runs<br>
into the same problems I warned you about so it is not used nowadays<br>
anymore). <br>
<br>
AHFinderDirect reads variables that were set in EVOL (ADMBase<br>
variables) and then can set variables (ahmask, the spherical surfaces)<br>
that are usable in ANALYSIS / POSTSTEP (the mask). Using the mask (or<br>
spherical surface, for that matter) in EVOL can be tricky: it will only<br>
sort of work if there is only one time level, in which case the previous<br>
value of the mask will be used in EVOL.<br>
<br>
Yours,<br>
Roland<br>
<br>
-- <br>
My email is as private as my paper mail. I therefore support encrypting<br>
and signing email messages. Get my PGP key from <a href="http://pgp.mit.edu">http://pgp.mit.edu</a> .<br>
</div>
</span></font></div>
</div>
</body>
</html>