From rhaas at illinois.edu Mon Sep 2 15:18:01 2024
From: rhaas at illinois.edu (rhaas at illinois.edu)
Date: Mon, 02 Sep 2024 15:18:01 -0500
Subject: [Users] Agenda for Thursday's Meeting
Message-ID:

Please update the Wiki with agenda items for Thursday's meeting. Thanks!

https://docs.einsteintoolkit.org/et-docs/meeting_agenda

--The Maintainers

From rhaas at illinois.edu Wed Sep 4 17:15:01 2024
From: rhaas at illinois.edu (rhaas at illinois.edu)
Date: Wed, 04 Sep 2024 17:15:01 -0500
Subject: [Users] Einstein Toolkit Meeting Reminder
Message-ID:

Hello,

Please consider joining the weekly Einstein Toolkit phone call at 9:00 am US central time on Thursdays. For details on how to connect and what agenda items are to be discussed, use the link below.

https://docs.einsteintoolkit.org/et-docs/Main_Page#Weekly_Users_Call

--The Maintainers

From scupp1 at my.apsu.edu Thu Sep 5 09:42:09 2024
From: scupp1 at my.apsu.edu (Cupp, Samuel D.)
Date: Thu, 5 Sep 2024 14:42:09 +0000
Subject: [Users] meeting minutes for 2024-09-05
Message-ID:

Present: Sam, Bill, Steve, Maxwell, Peter, Leo, Roland

# ET Release
* release name "Sophie Germain"

# Mailing list issues
* ongoing struggles with managing the rejection list
* users list emails are rejected by yahoo, and emails from yahoo fail to send to the list

# Tickets
* Ticket 2822: on homebrew, there's a problem with pkg-config; a workaround exists; it would also help to offer a non-static compiling option
* Ticket 2821: update to the petsc external library; a newer compiler needs a newer version of petsc, and newer petsc needs updates to the build script
* several tickets have proposed solutions and are ready for review

Next week
Chair: Zach
Minute taker: Roland

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rhaas at illinois.edu Mon Sep 9 10:59:30 2024
From: rhaas at illinois.edu (Haas, Roland)
Date: Mon, 9 Sep 2024 15:59:30 +0000
Subject: [Users] survey participants wanted
Message-ID:

Hello all,

this came to me via other channels (originally from Carole Goble, UK):

As part of a PhD, we are conducting a survey that aims to gather insights into the documenting and reporting practices for research studies or experiments in computational science disciplines. Reporting practices refer to how researchers document and report the details of their research studies or experiments. Computational sciences involve using computational methods to solve scientific problems, with study methods typically being partially or entirely computational (e.g. bioinformatics, chemistry, astronomy, engineering, finance).

What will your participation do?
* Help us identify the common documenting and reporting practices used by researchers.
* Contribute to the development of a framework that enhances reproducibility in computational science research.
* Reflect on additional reporting practices you might consider.

What is this survey about?

The survey will take approximately 20-30 minutes to complete. It includes questions about your practices, experiences, and opinions on documenting/reporting research studies within your discipline. Participation is entirely voluntary and anonymous, and you can withdraw at any time without consequence. Participant information will be kept confidential and only accessed by the research team at the University of Manchester. The remaining survey responses will be anonymised and manually sanitised for privacy concerns before research analysis and may be published in a repository as part of open research practices.

How to Participate?
If you are interested in participating, please click the following link: https://www.qualtrics.manchester.ac.uk/jfe/form/SV_6JqMOFjTrJQ65j8

We look forward to your valuable insights and appreciate your help advancing reproducibility for computational research.

--
My email is as private as my paper mail. I therefore support encrypting and signing email messages. Get my PGP key from http://keys.gnupg.net.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rhaas at illinois.edu Mon Sep 9 15:18:01 2024
From: rhaas at illinois.edu (rhaas at illinois.edu)
Date: Mon, 09 Sep 2024 15:18:01 -0500
Subject: [Users] Agenda for Thursday's Meeting
Message-ID:

Please update the Wiki with agenda items for Thursday's meeting. Thanks!

https://docs.einsteintoolkit.org/et-docs/meeting_agenda

--The Maintainers

From sdtootle at uidaho.edu Tue Sep 10 17:08:03 2024
From: sdtootle at uidaho.edu (Tootle, Samuel (sdtootle@uidaho.edu))
Date: Tue, 10 Sep 2024 22:08:03 +0000
Subject: [Users] Number of theses using the FUKA codes
Message-ID:

Dear NR enthusiasts,

In an attempt to determine the utility of the Frankfurt University/KADATH (FUKA) initial data codes in support of student research, I am trying to ascertain the number of student theses that utilized the FUKA code. So, if you are a student or are working with a student and have used FUKA as part of your thesis work, it would be great if you could let me know.

Warm regards,
Samuel Tootle
Postdoctoral Researcher
University of Idaho
https://samueltootle.github.io/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rhaas at illinois.edu Wed Sep 11 17:15:01 2024
From: rhaas at illinois.edu (rhaas at illinois.edu)
Date: Wed, 11 Sep 2024 17:15:01 -0500
Subject: [Users] Einstein Toolkit Meeting Reminder
Message-ID:

Hello,

Please consider joining the weekly Einstein Toolkit phone call at 9:00 am US central time on Thursdays.
For details on how to connect and what agenda items are to be discussed, use the link below.

https://docs.einsteintoolkit.org/et-docs/Main_Page#Weekly_Users_Call

--The Maintainers

From lucas.t.s.carneiro at gmail.com Thu Sep 12 10:14:43 2024
From: lucas.t.s.carneiro at gmail.com (Lucas Timotheo Sanches)
Date: Thu, 12 Sep 2024 10:14:43 -0500
Subject: [Users] Meeting minutes for 2024-09-12
Message-ID:

Present: Lucas (minutes), Roland, Peter Diener, Maxwell Rizzo, Zach Etienne (chair)

# Timeline
* Zach reminds us that the deadline for new features to be added to the main branch is September 27.
* Roland reported that Hayley Macpherson said her cosmology code won't be ready for this release; maybe it will be for the next.
* SPHINCS and McLachlan: Peter implemented the BSSN improvements from Zach's paper. He says that further testing is needed.
* Baikal: Zach implemented his BSSN improvements.

# Questions on the mailing list
* Zach encourages people to respond to the FUKA usage survey, as it will help not only FUKA but also its users in future presentations and grant reports.

# Tickets ready for review
* 2824: Test to see if we can add the `listings` package to the top-level LaTeX file in the thorn guide generation.
* 2823: Zach asks for reviewers of his student package. Roland says that Wolfgang K. has something that can help and will forward it. Maxwell is interested in reviewing.
* 2813: Reaching final stages. Users should be ready to update their CarpetX codes that use the regrid error grid function. The change should be a simple modification in `interface.ccl` files.

From cuosakwe at ucalgary.ca Fri Sep 13 12:31:38 2024
From: cuosakwe at ucalgary.ca (CJ Osakwe)
Date: Fri, 13 Sep 2024 17:31:38 +0000
Subject: [Users] TOVSolver interpolation algorithm
Message-ID:

Hello,

Thank you for your response to my previous inquiry.
I am now trying to model a hybrid star in the Einstein Toolkit, and I have some questions about the algorithm in the TOVSolver thorn that takes the 1D solution to the TOV equations and interpolates it to the 3D grid.

I was able to extract the 1D data and plot it, but when I do, the pressure v. radius and density v. radius relations noticeably change (they both decrease sooner and reach zero sooner in the 3D case than the 1D case). I am wondering why this happens, and if it should be expected.

Cheers,
CJ Osakwe

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bozzola.gabriele at gmail.com Sat Sep 14 18:33:23 2024
From: bozzola.gabriele at gmail.com (Gabriele Bozzola)
Date: Sat, 14 Sep 2024 16:33:23 -0700
Subject: [Users] TOVSolver interpolation algorithm
In-Reply-To:
References:
Message-ID:

Hi CJ Osakwe,

The 1D and 3D solutions look different because different coordinate systems are being used, so the meaning of "radius" changes. More specifically, the 1D solution uses Schwarzschild coordinates, while the 3D interpolation is performed in isotropic coordinates, which tend to be better behaved for numerical relativity simulations.

You can find more information here:
https://einsteintoolkit.org/thornguide/EinsteinInitialData/TOVSolver/documentation.html

Best,
Gabriele

On Fri, Sep 13, 2024 at 10:31 AM CJ Osakwe wrote:
> Hello,
>
> Thank you for your response to my previous inquiry. I am now trying to
> model a hybrid star in the Einstein Toolkit, and I have some questions
> about the algorithm in the TOVSolver thorn that takes the 1D solution to
> the TOV equations and interpolates it to the 3D grid.
>
> I was able to extract the 1D data and plot it, but when I do, the pressure
> v. radius and density v. radius relations noticeably change (they both
> decrease sooner and reach zero sooner in the 3D case than the 1D case). I
> am wondering why this happens, and if it should be expected.
> Cheers,
> CJ Osakwe
>
> _______________________________________________
> Users mailing list
> Users at einsteintoolkit.org
> http://lists.einsteintoolkit.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rhaas at illinois.edu Mon Sep 16 15:18:01 2024
From: rhaas at illinois.edu (rhaas at illinois.edu)
Date: Mon, 16 Sep 2024 15:18:01 -0500
Subject: [Users] Agenda for Thursday's Meeting
Message-ID:

Please update the Wiki with agenda items for Thursday's meeting. Thanks!

https://docs.einsteintoolkit.org/et-docs/meeting_agenda

--The Maintainers

From rhaas at illinois.edu Wed Sep 18 17:15:02 2024
From: rhaas at illinois.edu (rhaas at illinois.edu)
Date: Wed, 18 Sep 2024 17:15:02 -0500
Subject: [Users] Einstein Toolkit Meeting Reminder
Message-ID:

Hello,

Please consider joining the weekly Einstein Toolkit phone call at 9:00 am US central time on Thursdays. For details on how to connect and what agenda items are to be discussed, use the link below.

https://docs.einsteintoolkit.org/et-docs/Main_Page#Weekly_Users_Call

--The Maintainers

From scupp1 at my.apsu.edu Thu Sep 19 10:27:04 2024
From: scupp1 at my.apsu.edu (Cupp, Samuel D.)
Date: Thu, 19 Sep 2024 15:27:04 +0000
Subject: [Users] Meeting minutes for 2024-09-19
Message-ID:

Present: Sam, Roland, Steve, Bill, Zach, Lucas, Peter

# ET Release
* currently proposed: David's ODESolver, Z4c thorn for CarpetX

# Tickets
* Several tickets regarding documentation; these thorns include additional packages or define new macros in the wrong place. These break when making the all-in-one documentation pdf.
* 2820: PRs to resolve the conflict between NewRadX and symmetry BCs are still awaiting a reviewer
* 2818: no progress on failing tests with gcc-14
* 2814: request to review the PR adding tensoralias support in interface.ccl
* 2786: PR needs to be revised now that the unified driver interface is supported by CarpetX
* 2806: Roland needs to run more tests before accepting the PR

# Other topics
* Zach has been working to replace AEILocalInterp's dependency on Maple (closed source) with Python. We can also use NRPy to further improve the thorn through its optimization methods.

Next week:
Chair - Peter
Minute taker - Roland

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jnicoules at ua.pt Fri Sep 20 12:05:58 2024
From: jnicoules at ua.pt (Jordan Nicoules)
Date: Fri, 20 Sep 2024 17:05:58 +0000
Subject: [Users] Schedule options and uninitialized refinement levels
Message-ID: <345a3be00aeb4ab5a75dd82361ebda63@ua.pt>

Dear all,

I am writing to try to clarify a few issues that I've been facing regarding the schedule options "global loop-local" and "local" for an analysis thorn. I have a thorn with a group scheduled as

schedule GROUP UAv_Analysis_Group at ANALYSIS after AHFinderDirect_maybe_do_masks

In this group, a function is dedicated to computing some grid functions:

schedule UAv_Analysis_gfs in UAv_Analysis_Group
{
  LANG: Fortran
  SYNC: dE_gf_volume
  SYNC: dJ_gf_volume
  SYNC: quadrupole_gf_volume
  SYNC: density_rho
  SYNC: density_p
} "Calculate grid functions"

I encounter an issue (described below) when using, in this schedule block, the option

OPTIONS: global loop-local

instead of the default "local". I was wondering about the "global loop-local" option because, in thorn ADMMass, it seems like the way to go (the schedule.ccl file contains comments about it). Using "local" yields the expected behavior, at least in the configurations which I have experimented with.
Since I don't have a fine understanding of these options and how they affect the schedule, I'm willing to keep "local", but would like to understand how to choose properly.

The problem that I'm facing with "global loop-local" is the following (all other things identical). I'm using a parameter file with Carpet, having 1 center and some refinement levels. My UAv_Analysis_gfs function contains an initial if statement that returns directly if cctk_iteration is not a multiple of a parameter do_analysis_every. The function sets a grid function called density_rho.

At the initial time, only the finest refinement level has initialized values. For later iterations, if do_analysis_every = 2^N, then for the output of density_rho, the finest level is initialized, the next N levels are uninitialized, and the remaining levels are initialized.

For instance, with CarpetRegrid2::num_levels_1 = 9 and do_analysis_every = 8, level 8 has values, levels 7, 6 and 5 don't, and levels 4 to 0 have values (except at iteration 0). In the extreme case do_analysis_every = 1, all levels have values. In the extreme case do_analysis_every = 256 (i.e. every coarse-level time step), only level 8 has values.

These issues do not occur with option "local".

Thank you for the insight and explanations that you can provide. I'm at your disposal for more information or details that could be useful.

Sincerely,
Jordan Nicoules

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cuosakwe at ucalgary.ca Mon Sep 23 11:20:53 2024
From: cuosakwe at ucalgary.ca (CJ Osakwe)
Date: Mon, 23 Sep 2024 16:20:53 +0000
Subject: [Users] TOVSolver interpolation algorithm
In-Reply-To:
References:
Message-ID:

Hello Gabriele,

Thanks for the response. Does this mean that the final (3D) output of the thorn is also in isotropic coordinates?
Cheers,
CJ

________________________________
From: Gabriele Bozzola
Sent: Saturday, September 14, 2024 5:33 PM
To: CJ Osakwe
Cc: users at einsteintoolkit.org
Subject: Re: [Users] TOVSolver interpolation algorithm

[EXTERNAL]

Hi CJ Osakwe,

The 1D and 3D solutions look different because different coordinate systems are being used, so the meaning of "radius" changes. More specifically, the 1D solution uses Schwarzschild coordinates, while the 3D interpolation is performed in isotropic coordinates, which tend to be better behaved for numerical relativity simulations.

You can find more information here:
https://einsteintoolkit.org/thornguide/EinsteinInitialData/TOVSolver/documentation.html

Best,
Gabriele

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bozzola.gabriele at gmail.com Mon Sep 23 11:40:54 2024
From: bozzola.gabriele at gmail.com (Gabriele Bozzola)
Date: Mon, 23 Sep 2024 09:40:54 -0700
Subject: [Users] TOVSolver interpolation algorithm
In-Reply-To:
References:
Message-ID:

Hi CJ,

Yes, that's correct. The solution of the TOV equation is in Schwarzschild coordinates, but it is in isotropic coordinates when mapped onto the 3D grid.
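For context, the standard relation between the two radial coordinates for a TOV star is sketched below. This uses common textbook conventions and is not taken from the thorn's source; m(r) denotes the enclosed gravitational mass.

```latex
% TOV metric in Schwarzschild (areal) coordinates:
ds^2 = -e^{2\phi(r)}\,dt^2 + \frac{dr^2}{1 - 2m(r)/r} + r^2\,d\Omega^2
% The isotropic radius \bar{r} is obtained by integrating
\frac{d\bar{r}}{\bar{r}} = \frac{dr}{r\sqrt{1 - 2m(r)/r}}
% Outside the star, where m(r) = M, this has the familiar closed form
r = \bar{r}\left(1 + \frac{M}{2\bar{r}}\right)^2
```

Since the isotropic radius of a given shell is smaller than its areal radius, profiles plotted against the isotropic coordinate reach the stellar surface at a smaller coordinate value, consistent with the pressure and density "reaching zero sooner" in the 3D data.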
Best,
Gabriele

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nils.vu at black-holes.org Mon Sep 23 14:00:03 2024
From: nils.vu at black-holes.org (Nils Vu)
Date: Mon, 23 Sep 2024 12:00:03 -0700
Subject: [Users] Open slots for NR community calls
Message-ID: <4A75EFED-2F1B-40D9-B075-1625073CCFBB@black-holes.org>

Dear Numerical Relativists,

We have open slots to present at the NR community calls this season! Please sign up here if you would like to present at one of the calls starting Oct 7:

- Request a slot to present at the NR community calls: https://github.com/sxs-collaboration/nr-community-call/wiki

The NR community call is a monthly meeting to foster exchange between our numerical-relativity groups and codes. We usually have two speakers from different groups give brief (10-15 min) talks, often on a technical aspect of the numerical-relativity code that they are working with, presenting what works well and what they are struggling with. The talks are followed by 15 min of discussion. The speakers are preferably early-career researchers, so the calls are also a good opportunity to increase your visibility to our community.

As always:
- Feel free to circulate this information within the NR community.
- Please subscribe to our mailing list to receive info on the community calls in the future (or send a short message to nr-community-call+subscribe at black-holes.org to subscribe) and join our Slack for discussions (channel: #nr-community-call).
- Details and schedule in the wiki: https://github.com/sxs-collaboration/nr-community-call/wiki

Best,
Nils

--
Nils Vu, Ph.D. (he/him)
Sherman Fairchild Postdoctoral Fellow in Theoretical Astrophysics
California Institute of Technology

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rhaas at illinois.edu Mon Sep 23 15:18:01 2024
From: rhaas at illinois.edu (rhaas at illinois.edu)
Date: Mon, 23 Sep 2024 15:18:01 -0500
Subject: [Users] Agenda for Thursday's Meeting
Message-ID:

Please update the Wiki with agenda items for Thursday's meeting.
Thanks!

https://docs.einsteintoolkit.org/et-docs/meeting_agenda

--The Maintainers

From rhaas at illinois.edu Mon Sep 23 15:47:08 2024
From: rhaas at illinois.edu (Roland Haas)
Date: Mon, 23 Sep 2024 15:47:08 -0500
Subject: [Users] Schedule options and uninitialized refinement levels
In-Reply-To: <345a3be00aeb4ab5a75dd82361ebda63@ua.pt>
References: <345a3be00aeb4ab5a75dd82361ebda63@ua.pt>
Message-ID: <20240923154708.24cb0bdf@ekohaes8.ncsa.illinois.edu>

Hello Jordan,

> instead of the default "local". I was wondering about the "global
> loop-local" option because, in thorn ADMMass, it seems like the way
> to go (the schedule.ccl file contains comments about it). Using
> "local" yields the expected behavior, at least in the configurations
> which I have experimented with. Since I don't have a fine
> understanding of these options and how they affect the schedule, I'm
> willing to keep "local", and yearn to understand how to choose
> properly.

Short answer: it's (mostly) only the output that is wrong (if you were to look at the data on the grid, chances are it would look correct). But the data will never be exactly correct unless you compute things in EVOL.

Very long answer: Scheduling with AMR and subcycling in time is fairly complicated.
As a general rule (there are very few exemptions, and ANALYSIS is not one of them):

* For each Cactus schedule bin (INITIAL, EVOL, POSTSTEP, ANALYSIS, plus some others) the full schedule is executed for each iteration and each refinement level.
* Coarse refinement levels are executed *first*, with finer refinement levels next.
* Scheduled functions with "OPTION: global" are executed exactly once for each iteration, either along with the coarsest refinement level for this iteration (early, forced by using global-early instead of global) or with the finest level (late, forced by using global-late instead of global), and are skipped in the other traversals of the schedule. In ANALYSIS and POSTSTEP, "global" is equivalent to "global-late", i.e. a scheduled routine marked as "global" will execute along with the "local" scheduled routines on the finest level.
* "loop-local" means that there will be an explicit loop over the refinement levels and the scheduled function will be called on each refinement level (and each grid component), in a similar manner to a "local" routine (but this will all happen at the same time as the *single* refinement level that "global" attaches to executes its scheduled functions).
* There's one more subtlety involved, namely a difference between EVOL and ANALYSIS concerning which refinement levels are considered eligible for execution. Basically speaking, in EVOL a refinement level executes its scheduled functions if it is possible to evaluate the RHS of the equation of motion, while in ANALYSIS a grid function executes once evolution for that time has finished. In practice this means that for iteration=1 one evaluates the RHS for refinement levels 0, 1, 2, ..., N since they all can evaluate the RHS (and the result is actually needed for boundary conditions for the finer levels), while *only* the finest level, refinement level N, will have finished its evolution step at iteration 1, so only it will run in ANALYSIS.
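As a concrete illustration of the ordering rules above, a hypothetical schedule.ccl fragment (routine and thorn names invented for illustration, not taken from any actual thorn) could look like:

```
# Runs exactly once per iteration; in ANALYSIS, "global" behaves like
# "global-late", i.e. it executes together with the finest refinement level:
schedule MyThorn_Prepare at ANALYSIS
{
  LANG: C
  OPTIONS: global
} "Prepare once per iteration"

# Runs at that same point in the schedule, but loops explicitly over all
# refinement levels (and grid components), like a "local" routine would:
schedule MyThorn_Compute at ANALYSIS after MyThorn_Prepare
{
  LANG: C
  OPTIONS: global loop-local
} "Compute on every refinement level"
```

With plain "local" instead, MyThorn_Compute would run during each refinement-level traversal, i.e. on levels 0, 1, 2, ... before the "global" routine (which executes with the finest level) has had a chance to run.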
So what ADMMass does (see the comments in there) is to ensure a fixed ordering between functions that are global and those that need to access grid functions. Specifically there is ADMMass_SetLoopCounter, which is "global", so it executes along with the *last* refinement level. Without the "global loop-local", the ADMMass_Surface scheduled function would run for each refinement level and thus would run on levels 0, 1, 2... before ADMMass_SetLoopCounter (which is "global" so runs at the same time as the finest level) has had a chance to run. It's complicated.

> The problem that I'm facing with "global loop-local" is the following
> (all other things identical). I'm using a parameter file with Carpet,
> having 1 center and some refinement levels. My UAv_Analysis_gfs
> function contains an initial if statement that returns directly if
> cctk_iteration is not a multiple of a parameter do_analysis_every.
> The function sets a grid function called density_rho.
>
> At the initial time, only the finest refinement level has initialized
> values. For later iterations, if do_analysis_every = 2^N, then for
> the output of density_rho, the finest level is initialized, the next
> N levels are uninitialized, and the remaining levels are initialized.
>
> For instance, with CarpetRegrid2::num_levels_1 = 9 and
> do_analysis_every = 8, level 8 has values, levels 7, 6 and 5 don't,
> and levels 4 to 0 have values (except at iteration 0). In the extreme
> case do_analysis_every = 1, all levels have values. In the extreme
> case do_analysis_every = 256 (i.e. every coarse level time step), only
> level 8 has values.

Hmm, this should compute the correct values for most of the grid points. However, my suspicion is that when you say that it does not contain the expected values, that is based on what you see in output files if you add some sort of `out_var = "dE_gf_volume"` to the parameter files, yes? In that case you will indeed see bad data in the output.
Namely, looking at the description above, notice that "global" in ANALYSIS is "global-late", i.e. it executes with the last refinement level (the finest one). However, *output* is also done level by level, that is, output for refinement level 0 is done first, along with all the scheduled functions for refinement level 0. So since your calculation with "global loop-local" runs *last*, things are not yet computed and you see old or garbage data in the output files.

> These issues do not occur with option "local".

Correct (well, at least nothing obviously wrong is output); in this case the data is computed just before it is output (but continue reading for some subtle errors).

*If* your calculations are all pointwise (not derivatives etc.) and you have only a *single* time level active for the grid functions that you compute, *then* (and only then) can you compute things in ANALYSIS. If your do_analysis_every variable is set such that things are *only* computed (and output, or otherwise used) when all refinement levels are in sync (i.e. every_coarse), then you can do a SYNC (but you still don't need more than 1 timelevel) without pulling in bad data. However, your answer in parts of the grid will be wrong since there will not be any restriction of the data computed on a fine grid to the coarse grid (which happens only when going from EVOL to POSTSTEP). This matters only for stencil operations but not for operations that are pointwise local (since they'd compute the same value either way). This will (mostly...) not affect any interpolation results and will not (I think) affect reductions (min, max, norm2, whatnot). But ... this is very tricky.

The safest strategy is to compute everything in EVOL (not POSTSTEP), in particular if it involves stencil operations (in which case it is almost a requirement).
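That safest strategy could be sketched with hypothetical interface.ccl/schedule.ccl fragments like the following (thorn, group, and variable names are invented; MoL_PostStep is used here as one common scheduling point inside the EVOL bin):

```
# interface.ccl: grid function group with three time levels so the
# driver can interpolate in time for prolongation/boundary data
CCTK_REAL my_analysis_vars TYPE=GF TIMELEVELS=3
{
  my_density
} "Analysis grid functions"

# schedule.ccl: allocate all three time levels and compute in EVOL at
# every iteration, right after each evolution step
STORAGE: my_analysis_vars[3]

schedule MyThorn_Compute in MoL_PostStep
{
  LANG: Fortran
  SYNC: my_analysis_vars
} "Compute analysis quantities after each evolution step"
```

Computed this way, the data participates in the normal restriction from fine to coarse levels like an evolved variable, avoiding the stale-overlap problem described above.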
You will need 3 timelevels though, at least if you plan on doing interpolations or reduction output at times other than every_coarse (and you need to compute at every iteration, or you are on very thin ice).

Your schedule statement makes me suspect that your operation is not fully pointwise and does instead involve a stencil operation. In that case, the best you can achieve is making sure that there is only 1 time level for the variables and that you compute only when all refinement levels are in sync in time (so every coarse). Note that you will still get incorrect results in the regions of the coarse grid that are overlaid by the fine grid (no restriction, so you are left with the result computed on the coarse grid), but that may not matter much to you.

> Thank you for the insight and explanations that you can provide. I'm
> at your disposal for more information or details that could be useful.

If you could provide the actual schedule.ccl and param.ccl files this would be helpful. Right now I am at least partially guessing.

Yours,
Roland

--
My email is as private as my paper mail. I therefore support encrypting and signing email messages. Get my PGP key from http://pgp.mit.edu .

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:

From rhaas at illinois.edu Wed Sep 25 17:15:01 2024
From: rhaas at illinois.edu (rhaas at illinois.edu)
Date: Wed, 25 Sep 2024 17:15:01 -0500
Subject: [Users] Einstein Toolkit Meeting Reminder
Message-ID:

Hello,

Please consider joining the weekly Einstein Toolkit phone call at 9:00 am US central time on Thursdays. For details on how to connect and what agenda items are to be discussed, use the link below.
https://docs.einsteintoolkit.org/et-docs/Main_Page#Weekly_Users_Call

--The Maintainers

From jnicoules at ua.pt Thu Sep 26 08:49:27 2024
From: jnicoules at ua.pt (Jordan Nicoules)
Date: Thu, 26 Sep 2024 13:49:27 +0000
Subject: [Users] Schedule options and uninitialized refinement levels
In-Reply-To: <20240923154708.24cb0bdf@ekohaes8.ncsa.illinois.edu>
References: <345a3be00aeb4ab5a75dd82361ebda63@ua.pt>, <20240923154708.24cb0bdf@ekohaes8.ncsa.illinois.edu>
Message-ID: <6db4f2d86d434a8a8a30d3ea635cb0df@ua.pt>

Dear Roland,

Thank you very much for your reply! It does help me understand the situation better and make sense of what I see! I am attaching the schedule.ccl and param.ccl files. Please be aware that I'm not starting from scratch and that some portions have probably been inspired by other thorns, so I'm fully open to suggestions or corrections.

I have a few follow-up comments and questions. Unfortunately, I can't tune in to today's weekly meeting, but maybe eventually it will be easier to interact more directly through this means.

> Short answer: it's (mostly) only the output that is wrong (if you were
> to look at the data on the grid chances are it would look correct). But
> the data will never be exactly correct unless you compute things in
> EVOL.
>
> Hmm, this should compute the correct values for most of the grid points.
> However my suspicion is that when you say that it does not contain the
> expected values then that is based on what you see in output files if
> you add some sort of `out_var = "dE_gf_volume"` to the parameter files,
> yes?

By looking at the data on the grid, you mean like using a print in the code? Or is there another way through the parameter file? Indeed, what I was referring to was 'out_var = "density_rho"' in the parameter file.

> *If* your calculations are all pointwise (not derivatives etc.) and
> you have only a *single* time level active for the grid functions that
> you compute *then* (and only then) can you compute things in ANALYSIS.
> If your do_analysis_every variable is set such that things are *only*
> computed (and output, or otherwise used) when all refinement levels are
> in sync (i.e. every_coarse) then you can do a SYNC (but you still don't
> need more than 1 timelevel) without pulling in bad data. However your
> answer in parts of the grid will be wrong since there will not be any
> restriction of the data computed on a fine grid to the coarse grid
> (which happens only when going from EVOL to POSTSTEP). This matters
> only for stencil operations but not for operations that are
> pointwise local (since they'd compute the same value either way). This
> will (mostly...) not affect any interpolation results and will not (I
> think) affect reductions (min, max, norm2 whatnot). But ... this is very
> tricky.
>
> The safest strategy is to compute everything in EVOL (not POSTSTEP), in
> particular if it involves stencil operations (in which case it is
> almost a requirement). You will need 3 timelevels though, at least if
> you plan on doing interpolations or reduction output at times other than
> every_coarse (and you need to compute at every iteration, or you are
> on very thin ice).
>
> Your schedule statement makes me suspect that your operation is not
> fully pointwise and does instead involve a stencil operation. In that
> case, the best you can achieve is making sure that there is only 1 time
> level for the variables and that you compute only when all refinement
> levels are in sync in time (so every coarse). Note that you will still
> get incorrect results in the regions of the coarse grid that are
> overlaid by the fine grid (no restriction so you are left with the
> result computed on the coarse grid), but that may not matter much to
> you.

To clarify: The density_rho and density_p grid functions are computed for output purposes.
The variables dE_gf_volume, ... are auxiliary grid functions which are used to compute total_energy, ... through a sum reduction. So those only really make sense every coarse time step, thus do_analysis_every should indeed be a multiple of every_coarse. I used a smaller one only for debugging and understanding purposes. By the way, is "every_coarse" an actual parameter that I could call, for instance in a ParamCheck function, to ensure that do_analysis_every is well-chosen indeed? Or is that ultimately up to the user to properly design the parameter file? All operations are indeed pointwise, there are no derivatives involved. Even though the GF variables are defined with 3 time levels, it feels to me that only one is necessary indeed (unless I'm missing something, but these are not evolved variables). Do I understand correctly then, that in that case I will not get incorrect results in the regions of the coarse grid that are overlaid by the fine grid? In particular, as I compute the sum reduction of dE_gf_volume to get the integral total_energy, the result I get is very sensible. I was wondering if the reduction operation was somehow "magically" navigating the finer levels behind the scenes, but from what you say, it really does only the reduction on the coarser level, doesn't it? I'm also not sure about the SYNC then. In the routine UAv_Analysis_gfs, the loop actually excludes ghost zones, in the fashion of do k = 1+cctk_nghostzones(3), cctk_lsh(3)-cctk_nghostzones(3) so it may seem pointless, except for visualization purposes, right? (for instance, in VisIt) Given the properties and goals of the quantities I'm using, and what you said, it sounds like I could leave that in ANALYSIS. But you seemed to favor EVOL. What would now be your advice, with the additional information? I still need to get the mask from AHFinderDirect, and from my understanding of the param.ccl of this thorn, it's at best run at POSTSTEP, isn't it? Many thanks once again! 
Best,
Jordan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: schedule.ccl
Type: application/octet-stream
Size: 2066 bytes
Desc: schedule.ccl
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: param.ccl
Type: application/octet-stream
Size: 568 bytes
Desc: param.ccl
URL: 

From rhaas at illinois.edu Thu Sep 26 09:35:26 2024
From: rhaas at illinois.edu (Roland Haas)
Date: Thu, 26 Sep 2024 09:35:26 -0500
Subject: [Users] meeting minutes for 2024-09-26
Message-ID: <20240926093526.2d653701@ekohaes8.ncsa.illinois.edu>

Present: Roland, Bill, Lucas, Maxwell, Sam, Steve, Zach

ET release
==========
new thorns:
* TOVola - Maxwell will review, will contact Zach, Roland will add instructions, https://bitbucket.org/einsteintoolkit/tickets/issues/2823
* Z4c: Steve will review, need test cases, https://bitbucket.org/einsteintoolkit/tickets/issues/2748/inclusion-of-z4c-in-einstein-toolkit
* Roland will add proposed thorns to master

Mailing list
============
* Roland provided an update on the transition; some accounts are being disabled due to bad email addresses / too many bounces
* could be due to the transition of the mailing list server and changes in the email relay server used

NRPy+
=====
Zach has been reworking the way NRPy+ can be used, making it possible to use it inline. Called NRPy-inline, it allows NRPy+ code to be embedded in C++ code between NRPYSTART/NRPYEND tags, which NRPy+ can then expand. Useful to e.g. handle tensorial expressions. Steve brought up possible ways of nicely interacting this with a make based build system. Zach has been exploring this. Currently it is only available in the NRPy+ git repository (pip install from there); documentation is very sparse right now: https://github.com/nrpy/nrpy

Open questions
==============
* Roland will respond to the comment on Jordan Nicoules' question, https://lists.einsteintoolkit.org/pipermail/users/2024-September/009463.html
* NR community calls are looking for speakers, https://lists.einsteintoolkit.org/pipermail/users/2024-September/009459.html

Open tickets
============
* https://bitbucket.org/einsteintoolkit/tickets/issues/2824/carpetx-documentation-file-uses-extra
  Lucas checked that lstlisting is present in the default LaTeX install on Ubuntu; Roland will test on macOS
* https://bitbucket.org/einsteintoolkit/tickets/issues/2814/tensortypealias-support-in-interfaceccl
  Steve suggests including this in the next ET release. Roland would prefer if things were only included in the ET once tested and documented.

Yours,
Roland

-- 
My email is as private as my paper mail. I therefore support encrypting and signing email messages. Get my PGP key from http://pgp.mit.edu .
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From rhaas at illinois.edu Fri Sep 27 15:53:52 2024
From: rhaas at illinois.edu (Roland Haas)
Date: Fri, 27 Sep 2024 15:53:52 -0500
Subject: [Users] Schedule options and uninitialized refinement levels
In-Reply-To: <6db4f2d86d434a8a8a30d3ea635cb0df@ua.pt>
References: <345a3be00aeb4ab5a75dd82361ebda63@ua.pt> <20240923154708.24cb0bdf@ekohaes8.ncsa.illinois.edu> <6db4f2d86d434a8a8a30d3ea635cb0df@ua.pt>
Message-ID: <20240927155352.38c05c44@ekohaes8.ncsa.illinois.edu>

Hello Jordan,

> By looking at the data on the grid, you mean like using a print in
> the code? Or is there another way through the parameter file? Indeed,
> what I was referring to was 'out_var = "density_rho"' in the
> parameter file.
Yes, if you were to add a printf (or so) statement then you would see the data on the grid, eg during the next iteration (or in another `global loop-local` routine in ANALYSIS, but not in a `local` routine in ANALYSIS, which would have the same issue as output).

> To clarify:
> The density_rho and density_p grid functions are computed for output
> purposes. The variables dE_gf_volume, ... are auxiliary grid
> functions which are used to compute total_energy, ... through a sum
> reduction. So those only really make sense every coarse time step,
> thus do_analysis_every should indeed be a multiple of every_coarse. I
> used a smaller one only for debugging and understanding purposes. By
> the way, is "every_coarse" an actual parameter that I could call, for
> instance in a ParamCheck function, to ensure that do_analysis_every
> is well-chosen indeed?

No, there is no such predefined variable. It is just a "speaking" variable name.

> Or is that ultimately up to the user to
> properly design the parameter file?

It is up to the user. You can define some helper variables if you like in parameter files, eg like so:

$every_coarse = 2**(Carpet::max_num_levels - 1)

assuming all your time_refinement_factors are 2. It is up to the user to ensure that this is correct though.

> All operations are indeed pointwise, there are no derivatives
> involved. Even though the GF variables are defined with 3 time
> levels, it feels to me that only one is necessary indeed (unless I'm
> missing something, but these are not evolved variables).

The 3 time levels would be used for interpolation in time if you ask for reduction output at timesteps that are not coarse time steps.

> Do I
> understand correctly then, that in that case I will not get incorrect
> results in the regions of the coarse grid that are overlaid by the
> fine grid? In particular, as I compute the sum reduction of
> dE_gf_volume to get the integral total_energy, the result I get is
> very sensible. I was wondering if the reduction operation was somehow
> "magically" navigating the finer levels behind the scenes, but from
> what you say, it really does only the reduction on the coarser level,
> doesn't it?

It does navigate the finer levels. The "sum" reduction computes a Riemann sum taking the (relative) volume of grid cells on the refinement levels into account. It also takes care of removing double-counting due to the overlapping fine / coarse grids. In your case, a purely pointwise calculation, all will be fine and the answer will be correct.

For non-pointwise operations there is one point where you will get a very slightly different answer than what would be most correct: at the last fine grid point, where the grid transitions to the coarse grid, the data will be slightly wrong. For that one location both the coarse and the fine grid contribute half a grid cell each, but the coarse grid data has not been updated via restriction from the fine grid, so this will result in slightly different answers than expected. Basically, if I try and draw this and give the weight that is used for each grid cell, then this is what things look like:

                                  transition
                                       v
fine level   :         x   x   x   x   x
weight       :        1/2  1   1   1  1/2
cell boundary:       ^   ^   ^   ^   ^   ^
integration  :         |***************|
coarse level : x       x       x       x       x
weight       : 1      1/2      0      1/2      1
cell boundary:     ^       ^       ^       ^       ^
integration  : ********|               |****************

So you can see that for the cell that I have marked with "transition", when it comes to the Riemann sum, half the contribution should come from the fine grid (the right hand half of the fine cell centered at that location) and half from the coarse grid (the left hand half of the coarse cell centered at that same location). Without the restriction, the answer computed on the coarse grid for the grid point marked by the "transition" marker will be slightly different (since the grid spacing is larger, the neighbouring values are different) than on the fine grid.
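The weighting described above can be sketched numerically. Below is a minimal 1D sketch (a hypothetical helper, not actual CarpetReduce code), assuming trapezoid-style weights, a refinement factor of 2, and a single fine level whose edges coincide with coarse grid points:

```python
def masked_riemann_sum(f, x0, x1, H, fx0, fx1):
    """Riemann sum of f over [x0, x1] with coarse spacing H and one
    fine level (spacing H/2) covering [fx0, fx1]."""
    h = H / 2
    total = 0.0
    # coarse level: weight 0 for points strictly inside the fine region,
    # 1/2 at the transition points and at the outer boundary, 1 elsewhere
    n = round((x1 - x0) / H)
    for i in range(n + 1):
        x = x0 + i * H
        if fx0 < x < fx1:
            w = 0.0
        elif x in (x0, x1, fx0, fx1):
            w = 0.5
        else:
            w = 1.0
        total += w * H * f(x)
    # fine level: ordinary trapezoid weights (1/2 at its own edge points)
    m = round((fx1 - fx0) / h)
    for j in range(m + 1):
        x = fx0 + j * h
        w = 0.5 if j in (0, m) else 1.0
        total += w * h * f(x)
    return total

# coarse grid on [0, 8] with H = 1, fine level on [2, 6]
print(masked_riemann_sum(lambda x: 1.0, 0.0, 8.0, 1.0, 2.0, 6.0))  # 8.0
print(masked_riemann_sum(lambda x: x,   0.0, 8.0, 1.0, 2.0, 6.0))  # 32.0
```

With these weights the overlapped coarse cells drop out, so the masked sum reproduces the plain trapezoid integral over the whole domain (exact here for constant and linear integrands).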
For a purely pointwise calculation the same number will be computed on the fine and on the coarse grid.

> I'm also not sure about the SYNC then. In the routine
> UAv_Analysis_gfs, the loop actually excludes ghost zones, in the
> fashion of do k = 1+cctk_nghostzones(3),
> cctk_lsh(3)-cctk_nghostzones(3) so it may seem pointless, except for
> visualization purposes, right? (for instance, in VisIt)

A SYNC will fill in values for the ghost zones and mesh refinement boundaries. Those are indeed used by visualization. They are skipped by reductions. Note that by using k = 1+cctk_nghostzones(3),cctk_lsh(3)-cctk_nghostzones(3) you are also skipping a layer of ghost points at the outer (physical) boundaries, and those *will* be used by the reduction (or at least the innermost boundary point will be used, with a weight of 1/2).

> Given the properties and goals of the quantities I'm using, and what
> you said, it sounds like I could leave that in ANALYSIS.

For just output you are fine. You cannot use anything computed in ANALYSIS in EVOL though (or at least it may not be what you expect it to be).

> But you
> seemed to favor EVOL.

EVOL is safer since it can be used both in EVOL and in ANALYSIS / POSTSTEP, and is the only option for anything involving derivatives. So if you ask which option to choose, and I do not want / cannot safely give you the detailed reasoning above / actually verify that things work as I think they should, I'll err on the side of caution. Note that CCTK_POSTSTEP and MoL_PostStep are very different.

> What would now be your advice, with the
> additional information? I still need to get the mask from
> AHFinderDirect, and from my understanding of the param.ccl of this
> thorn, it's at best run at POSTSTEP, isn't it?

AHFinderDirect sets the mask in either ANALYSIS or POSTSTEP (depending on parameters).
For modern codes ANALYSIS and POSTSTEP are identical (scheduled routines in ANALYSIS could use a TRIGGER, but that one runs into the same problems I warned you about, so it is not used nowadays anymore). AHFinderDirect reads variables that were set in EVOL (ADMBase variables) and then can set variables (ahmask, the spherical surfaces) that are usable in ANALYSIS / POSTSTEP (the mask). Using the mask (or spherical surface for that matter) in EVOL can be tricky; it will only kind of work if there is only one time level, in which case the previous value of the mask will be used in EVOL.

Yours,
Roland

-- 
My email is as private as my paper mail. I therefore support encrypting and signing email messages. Get my PGP key from http://pgp.mit.edu .
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From jnicoules at ua.pt Mon Sep 30 11:47:25 2024
From: jnicoules at ua.pt (Jordan Nicoules)
Date: Mon, 30 Sep 2024 16:47:25 +0000
Subject: [Users] Schedule options and uninitialized refinement levels
In-Reply-To: <20240927155352.38c05c44@ekohaes8.ncsa.illinois.edu>
References: <345a3be00aeb4ab5a75dd82361ebda63@ua.pt> <20240923154708.24cb0bdf@ekohaes8.ncsa.illinois.edu> <6db4f2d86d434a8a8a30d3ea635cb0df@ua.pt>, <20240927155352.38c05c44@ekohaes8.ncsa.illinois.edu>
Message-ID: 

Hi Roland,

Thank you very much once again for the detailed answer! In the meantime, I had a look at CarpetReduce's reduce.cc. Without going into full details, I could understand that reductions would indeed go through the various levels, with the correct multiplicative factor, as I first thought. But your explanation adds another layer of understanding.

While I'm at it, I would like to ask another question that came up during this experimentation process, regarding ghost points.
I could notice that at the first iteration which completes a time step for a given level - for instance, level N at iteration 1, level N-1 at iteration 2, level N-2 at iteration 4... - some points of density_rho at the edge of the refinement level were still uninitialized. I can see that this doesn't happen for evolved variables, or when UAv_Analysis_group is scheduled in EVOL (drawing inspiration from mclachlan's ML_ADMConstraints, I put it in MoL_PseudoEvolution after MoL_PostStep for experimentation). While I assumed that there would be ghost_size such points, some quick experiments rather seemed to indicate ~3*ghost_size points. I could notice that the last points with actual values correspond to points with coordinates \pm CarpetRegrid2::radius_1[i], where the proper end of the grid corresponds to the grid structure given in the standard output (and recovered with Kuibit, VisIt). For example, with

CoordBase::dx = 1.0
driver::ghost_size = 3
CarpetRegrid2::radius_1[1] = 24.0

then level 1 extends up to 28.5, which is 9 points away from x=24.

Now, since UAv_Analysis variables focus on output at least every_coarse iterations, I was not too worried by this specifically. However, I have also noticed this pattern when looking at LeanBSSNMoL::ham for instance. I can see constraint violations related to level boundaries, which I expected. But they match with CarpetRegrid2::radius_1[i], and not with the grid structure, a feature which I found puzzling. I can see that at iteration 0 already. Would you have some more explanation about this please?

Best,
Jordan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rhaas at illinois.edu Mon Sep 30 15:18:01 2024
From: rhaas at illinois.edu (rhaas at illinois.edu)
Date: Mon, 30 Sep 2024 15:18:01 -0500
Subject: [Users] Agenda for Thursday's Meeting
Message-ID: 

Please update the Wiki with agenda items for Thursday's meeting. Thanks!

https://docs.einsteintoolkit.org/et-docs/meeting_agenda

--The Maintainers