[Users] global reductions in specified domains
Vassilios Mewes
vassilios.mewes at uv.es
Fri Nov 15 10:26:53 CST 2013
Hello Roland,
Thank you for your quick reply.
I have just talked to co-workers in the office (who program with MPI),
and they told me I could simply reduce the values of rho that each
processor accumulates "locally" for the points that fall inside the
shell, and that this should give me the right result. This seems to be
quite similar to what you proposed,
i.e. (written in pseudocode):
for (int iradii = 1; iradii < nradii; iradii++)
{
  LC_LOOP3 /* loop over the points owned by this processor */
  {
    const CCTK_INT i3D = CCTK_GFINDEX3D(cctkGH, i, j, k);
    const CCTK_REAL rad = sqrt(x[i3D]*x[i3D] + y[i3D]*y[i3D] + z[i3D]*z[i3D]);
    if (shell_radius[iradii-1] < rad && rad < shell_radius[iradii])
    {
      rho_in_shell[iradii] += rho[i3D];
    }
  }
}
and then sum each rho_in_shell[iradii] over all processors:
for (int iradii = 1; iradii < nradii; iradii++)
{
  /* schematic: sum the local rho_in_shell[iradii] over all processors
     into rho_total_shell[iradii] */
  CCTK_Reduce(cctkGH, -1, ..., &rho_total_shell[iradii], rho_in_shell[iradii]);
}
Would that be an option, or would it even work?
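
For concreteness, I imagine the reduction loop could be written with
CCTK_ReduceLocScalar and the "sum" reduction handle, roughly like the
sketch below (untested; it assumes the active reduction thorn supports
CCTK_ReduceLocScalar, and all names other than the Cactus calls are just
my own placeholders from the pseudocode above):

#include "cctk.h"

/* sum the per-processor shell sums over all processors; every
   processor receives the result (proc = -1) */
void sum_shells_over_processors(const cGH *cctkGH, int nradii,
                                const CCTK_REAL *rho_in_shell,
                                CCTK_REAL *rho_total_shell)
{
  const int sum_handle = CCTK_ReductionHandle("sum");

  for (int iradii = 1; iradii < nradii; iradii++)
  {
    CCTK_REAL total = 0.0;
    CCTK_ReduceLocScalar(cctkGH, -1, sum_handle,
                         &rho_in_shell[iradii], &total,
                         CCTK_VARIABLE_REAL);
    rho_total_shell[iradii] = total;
  }
}
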
best wishes,
Vassili
On Fri, Nov 15, 2013 at 5:07 PM, Roland Haas
<roland.haas at physics.gatech.edu> wrote:
>
> Hello Vassilios,
>
> > Is there a way to use the global Cactus reduction functions on
> > fractions of the entire grid?
> >
> > I am interested in summing grid scalars (say rho) in spherical
> > shells in the grid. Is there a way to perform the "sum" reduction
> > on those shells specified by a global radius variable? Or do I need
> > to write my own MPI function that sums the contributions from all
> > processors that have grid points that fall within the shell I am
> > interested in?
> There is no such facility built into Cactus, unfortunately. The easiest
> way to achieve a reduction using only points satisfying some criterion
> (having coordinates inside the shells in your case) is to set up a
> helper grid function which is set to:
>
>          / rho : if condition is true
> helper = {
>          \ 0   : otherwise
>
> For multiple shells you will have to set up multiple helper grid
> functions (or re-use the same one several times). One way of doing
> this is:
>
> schedule.ccl:
>
> STORAGE: helper[1]
>
> schedule setup_integrand AT CCTK_ANALYSIS
> {
>   OPTIONS: global loop-local
>   LANG: C
> } "compute 'helper' grid function for integrand"
>
> schedule integrate AT CCTK_ANALYSIS AFTER setup_integrand
> {
>   OPTIONS: global
>   LANG: C
> } "reduce 'helper' grid function on all processes"
>
> You can have a look at the thorn ADMMass (in AEIThorns) and its
> ADMMass_Volume routines to see how this is done in practice.
>
> Doing the reduction by hand can be tricky since you will have to
> properly handle mesh refinement boundaries. Roughly speaking, you have
> to multiply your integrand by the CarpetReduce::weight grid function
> and only include the non-ghost-zone points (i.e. start cctk_nghostzones
> points from the edge of each component). Outer boundaries require
> special treatment (detectable via cctk_bbox): there you have to include
> all points. I am right now not sure how symmetry boundaries need to be
> handled (i.e. whether for vertex centering the weight is already set to
> 1/2 there, or whether you will have to query the symmetry thorn to find
> out whether an outer boundary is a symmetry boundary).
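>
> Very roughly, the by-hand version could look like the sketch below
> (untested; it assumes CCTK_REAL is double, that you accumulate into a
> hypothetical grid scalar 'local_rho_sum' declared in your interface.ccl
> and zeroed beforehand, and it ignores the symmetry-boundary question
> above):
>
> #include <mpi.h>
> #include "cctk.h"
> #include "cctk_Arguments.h"
>
> /* per-component part, e.g. scheduled with OPTIONS: global loop-local */
> void accumulate_rho_local(CCTK_ARGUMENTS)
> {
>   DECLARE_CCTK_ARGUMENTS;
>
>   /* weight grid function provided by CarpetReduce */
>   const CCTK_REAL *weight =
>     (const CCTK_REAL *)CCTK_VarDataPtr(cctkGH, 0, "CarpetReduce::weight");
>
>   /* skip ghost zones, but keep all points on outer boundaries */
>   int imin[3], imax[3];
>   for (int d = 0; d < 3; d++)
>   {
>     imin[d] = cctk_bbox[2*d]   ? 0           : cctk_nghostzones[d];
>     imax[d] = cctk_bbox[2*d+1] ? cctk_lsh[d] : cctk_lsh[d] - cctk_nghostzones[d];
>   }
>
>   for (int k = imin[2]; k < imax[2]; k++)
>     for (int j = imin[1]; j < imax[1]; j++)
>       for (int i = imin[0]; i < imax[0]; i++)
>       {
>         const int i3D = CCTK_GFINDEX3D(cctkGH, i, j, k);
>         *local_rho_sum += weight[i3D] * rho[i3D];
>       }
> }
>
> /* global part, scheduled once, after the loop over components */
> void sum_rho_global(CCTK_ARGUMENTS)
> {
>   DECLARE_CCTK_ARGUMENTS;
>
>   CCTK_REAL total = 0.0;
>   MPI_Allreduce(local_rho_sum, &total, 1, MPI_DOUBLE, MPI_SUM,
>                 MPI_COMM_WORLD);
>
>   CCTK_VInfo(CCTK_THORNSTRING, "weighted sum of rho: %g", (double)total);
> }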
>
> Yours,
> Roland
>
> --
> My email is as private as my paper mail. I therefore support encrypting
> and signing email messages. Get my PGP key from http://keys.gnupg.net.