[Users] Information about grid decomposition

Roland Haas rhaas at illinois.edu
Thu Nov 10 10:22:07 CST 2016


Hello Cosima, Erik,

For code that finds the "owner" of a particular location, you can look
at the code in CarpetInterp itself. While at Georgia Tech I wrote a
small helper thorn that does exactly this. You can find it here:

https://github.com/rhaas80/MapPoints.git

It provides a function MapPoints (declared in its interface.ccl) that
returns, among other information, the MPI rank of the process owning
each point:

CCTK_INT FUNCTION MapPoints \
    (CCTK_POINTER_TO_CONST IN cctkGH,            \
     CCTK_INT IN N_dims,                         \
     CCTK_INT IN param_table_handle,             \
     CCTK_INT IN coord_system_handle,            \
     CCTK_INT IN N_interp_points,                \
     CCTK_INT IN interp_coords_type_code,        \
     CCTK_POINTER_TO_CONST ARRAY IN coords_list, \
     CCTK_POINTER ARRAY OUT procs,               \
     CCTK_POINTER ARRAY OUT rlev)
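
For illustration, a call from C could look roughly like the sketch
below. Treat it as a sketch from memory: in particular I am assuming
that procs and rlev follow the CCTK_InterpGridArrays convention of
passing an array of pointers to per-point output buffers; the thorn's
sources and test cases are authoritative.

    /* Sketch: map npoints coordinates to their owning MPI ranks and
     * refinement levels.  Assumes this thorn's interface.ccl declares
     * USES FUNCTION MapPoints so that the prototype is available. */
    #include "cctk.h"
    #include "util_Table.h"

    void map_my_points (const cGH *cctkGH)
    {
      const CCTK_INT npoints = 3;
      const CCTK_REAL x[3] = {0.0, 1.0, 2.0};
      const CCTK_REAL y[3] = {0.0, 0.0, 0.0};
      const CCTK_REAL z[3] = {0.0, 0.5, 1.0};
      CCTK_POINTER_TO_CONST coords[3] = {x, y, z};

      /* assumed convention: one CCTK_INT per point in each buffer */
      CCTK_INT owner[3], level[3];
      CCTK_POINTER procs[1] = {owner};
      CCTK_POINTER rlev[1]  = {level};

      const int table = Util_TableCreate (UTIL_TABLE_FLAGS_DEFAULT);
      const int coords_handle = CCTK_CoordSystemHandle ("cart3d");

      const CCTK_INT ierr =
        MapPoints (cctkGH, 3, table, coords_handle, npoints,
                   CCTK_VARIABLE_REAL, coords, procs, rlev);
      if (ierr < 0)
        CCTK_WARN (CCTK_WARN_ABORT, "MapPoints failed");

      Util_TableDestroy (table);
    }

Afterwards owner[i] holds the MPI rank that owns point i and level[i]
the refinement level it lives on, given the buffer convention assumed
above.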

Yours,
Roland

> Cosima
> 
> This information is available from Carpet. Carpet maintains a "grid
> hierarchy" gh; if you use AMR (without multiple patches), there is a
> single gh that you can look up in Carpet, probably as
> "Carpet::vhh.at(0)" (this is C++ code). This grid hierarchy contains
> the information you want: you can look up which regions exist and to
> which process each is assigned, and also on which region (and thus on
> which process) a particular point is located.
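> 
> For example, a sketch of walking this structure (Carpet's internals
> are not a stable public API; the gh accessors used here, reflevels(),
> components(), extent(), and processor(), follow the current Carpet
> sources and may differ between versions):
> 
>     // Sketch: print which process owns each component of the grid
>     // hierarchy.  C++, compiled in a thorn built against Carpet.
>     #include "cctk.h"
>     #include "carpet.hh"
> 
>     void print_grid_structure (void)
>     {
>       const gh *hh = Carpet::vhh.at(0); // the single gh for pure AMR
>       const int ml = 0;                 // multigrid level
>       for (int rl = 0; rl < hh->reflevels(); ++rl) {
>         for (int c = 0; c < hh->components(rl); ++c) {
>           const ibbox &ext = hh->extent(ml, rl, c);
>           CCTK_VInfo (CCTK_THORNSTRING,
>                       "level %d, component %d on process %d: "
>                       "lower [%d,%d,%d], upper [%d,%d,%d]",
>                       rl, c, hh->processor(rl, c),
>                       ext.lower()[0], ext.lower()[1], ext.lower()[2],
>                       ext.upper()[0], ext.upper()[1], ext.upper()[2]);
>         }
>       }
>     }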
> 
> Having said this -- are you sure that this will save a noticeable amount of
> time compared to letting Carpet handle the interpolation? Did you benchmark
> your current implementation? If you leave out the interpolation in your
> current implementation, how much time do you save?
> 
> I'd be happy to discuss further. A good venue would be, for example,
> one of the Einstein Toolkit telecons that we have on Mondays. (I might
> not attend next Monday because I'll be at SC16.)
> 
> -erik
> 
> 
> On Thu, Nov 10, 2016 at 8:54 AM, <breu at th.physik.uni-frankfurt.de> wrote:
> 
> > Dear users of the Einstein Toolkit,
> >
> > I have a question: I would like to integrate the geodesic equations
> > for a large number of particles (e.g. one million) in parallel. To
> > save computational time, I want to make sure that the arrays
> > containing the particle data are handled by the same process that
> > takes care of the grid patch on which the particles are moving, and
> > to communicate particle data only when a particle moves to another
> > grid patch.
> >
> > Since I want each process to communicate only with the processes
> > that own the adjacent grid patches, can I access information on
> > which process owns which grid patch, and if so, how?
> >
> > So far I have found only functions that return the upper and lower
> > bounds for the local process, but each process would need to know
> > where all the rest of the grid is.
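> >
> > (For concreteness: what I have found so far are queries along the
> > lines of CCTK_CoordLocalRange, which only describe the piece of the
> > grid owned by the calling process, e.g.
> >
> >     /* local coordinate range in the x direction only */
> >     CCTK_REAL lower, upper;
> >     CCTK_CoordLocalRange (cctkGH, &lower, &upper, -1, "x", "cart3d");
> >
> > and nothing that exposes the global layout.)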
> >
> > Kind regards,
> >
> > Cosima Breu
> >

-- 
My email is as private as my paper mail. I therefore support encrypting
and signing email messages. Get my PGP key from http://keys.gnupg.net.