[Users] Information about grid decomposition
breu at th.physik.uni-frankfurt.de
Thu Nov 10 15:06:07 CST 2016
Dear Professor Schnetter, dear Professor Haas,
thank you very much for your answer.
The code I am using is the geodesic thorn written by Ian Hinder and Daniel
Gerlicher, which I am trying to parallelize.
I tried removing the interpolation entirely, and the code ran faster by a
factor of about 30 on 96 cores, but I have yet to benchmark it properly to
find out where the time really goes.
The problem is that I need to interpolate the fields and their derivatives
at every timestep, and I thought I could save time if the interpolation
were done locally instead of communicating the particle coordinates at
every step. So far, however, I have only used CarpetInterp, not
CarpetInterp2.
I would be very interested to participate in the telecon.
Kind regards,
Cosima Breu
> Hello Cosima, Erik,
>
> For code that finds out the "owner" of a particular location, you can
> look at the code in CarpetInterp itself. I had written a small helper
> thorn that does this while at GeorgiaTech. You can find it here:
>
> https://github.com/rhaas80/MapPoints.git
>
> It provides a function MapPoints (in interface.ccl) that returns (among
> other information) the MPI rank of the owning process:
>
> CCTK_INT FUNCTION MapPoints \
>     (CCTK_POINTER_TO_CONST IN cctkGH, \
>      CCTK_INT IN N_dims, \
>      CCTK_INT IN param_table_handle, \
>      CCTK_INT IN coord_system_handle, \
>      CCTK_INT IN N_interp_points, \
>      CCTK_INT IN interp_coords_type_code, \
>      CCTK_POINTER_TO_CONST ARRAY IN coords_list, \
>      CCTK_POINTER ARRAY OUT procs, \
>      CCTK_POINTER ARRAY OUT rlev)
>
> Yours,
> Roland
>
>> Cosima
>>
>> This information is available from Carpet. Carpet maintains a "grid
>> hierarchy" gh, and if you use AMR, there will be a single gh that you
>> can look up in Carpet, probably as "Carpet::vhh.at(0)" (this is C++
>> code). This grid hierarchy contains the information you want -- you
>> can look up which regions exist, to which process they are assigned,
>> and you can also look up on which region (and thus on which process)
>> a particular point is located.
>>
>> Having said this -- are you sure that this will save a noticeable
>> amount of time compared to letting Carpet handle the interpolation?
>> Did you benchmark your current implementation? If you leave out the
>> interpolation in your current implementation, how much time do you
>> save?
>>
>> I'd be happy to discuss further. A good venue would e.g. be one of
>> the Einstein Toolkit telecons that we have on Mondays. (I might not
>> attend next Monday because I'll be at SC16.)
>>
>> -erik
>>
>>
>> On Thu, Nov 10, 2016 at 8:54 AM, <breu at th.physik.uni-frankfurt.de>
>> wrote:
>>
>> > Dear users of the Einstein Toolkit,
>> >
>> > I have a question: I would like to integrate the geodesic equations
>> > for a large number of particles in parallel (e.g. one million). To
>> > save computational time, I want to make sure that the arrays
>> > containing the particle data are handled by the same process that
>> > takes care of the corresponding grid patch on which the particles
>> > are moving, and only communicate particle data if a particle moves
>> > to another grid patch.
>> >
>> > Since I only want a process to communicate with the processes that
>> > contain the adjacent grid patches, can I access information on
>> > which process owns which grid patch, and if so, how?
>> >
>> > So far I have only found functions that return the upper and lower
>> > bounds for the local process, but each process would need to know
>> > where all the rest of the grid is.
>> >
>> > Kind regards,
>> >
>> > Cosima Breu
>> >
>> > _______________________________________________
>> > Users mailing list
>> > Users at einsteintoolkit.org
>> > http://lists.einsteintoolkit.org/mailman/listinfo/users
>> >
>>
>>
>>
>
>
>
> --
> My email is as private as my paper mail. I therefore support encrypting
> and signing email messages. Get my PGP key from http://keys.gnupg.net.
>