[Users] SPH in EinsteinToolkit

Stefan Ruehe sruehe at astrophysik.uni-kiel.de
Thu Nov 26 02:47:19 CST 2015


 

Hello, 

First, I agree that a variable smoothing length is the better choice in an
SPH algorithm.

As I understand it, one of the bigger problems is the implementation of the
particles in Carpet, because of MPI. To store and search particles in a
tree structure, global memory outside of the parallelized threads is
necessary. In that case we could use a variable smoothing length without
problems, I think. (It is still possible to write a search routine without
trees, using the grid structure, but then it is also necessary to
determine the grid cell enclosing a particle.)
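Such a grid-based search could look roughly like the following C++ sketch (all names here are illustrative, not Cactus/Carpet API): particles are binned into cells whose edge equals the kernel support radius, so all neighbours of a particle lie in its own cell or the 26 adjacent ones.

```cpp
// Sketch: grid-cell ("linked-list") neighbour search for SPH.
// Illustrative only -- names and structure are invented for this example.
#include <array>
#include <cassert>
#include <cmath>
#include <map>
#include <vector>

struct Particle { double x, y, z; };

// Map a position to the index triple of its enclosing cell. With a cell
// edge equal to the kernel support radius, every neighbour within the
// support lies in the 3x3x3 block of cells around a particle.
std::array<long,3> cell_of(const Particle& p, double cell) {
  return { (long)std::floor(p.x/cell),
           (long)std::floor(p.y/cell),
           (long)std::floor(p.z/cell) };
}

// Return indices of all particles within the support radius 2h of ps[i].
std::vector<int> neighbours(const std::vector<Particle>& ps,
                            int i, double h) {
  const double cell = 2.0*h;                 // kernel support radius
  std::map<std::array<long,3>, std::vector<int>> bins;
  for (int j = 0; j < (int)ps.size(); ++j)
    bins[cell_of(ps[j], cell)].push_back(j);

  std::vector<int> result;
  const std::array<long,3> c = cell_of(ps[i], cell);
  for (long dx = -1; dx <= 1; ++dx)
  for (long dy = -1; dy <= 1; ++dy)
  for (long dz = -1; dz <= 1; ++dz) {
    auto it = bins.find({c[0]+dx, c[1]+dy, c[2]+dz});
    if (it == bins.end()) continue;
    for (int j : it->second) {
      const double r2 = std::pow(ps[i].x-ps[j].x, 2)
                      + std::pow(ps[i].y-ps[j].y, 2)
                      + std::pow(ps[i].z-ps[j].z, 2);
      if (j != i && r2 <= cell*cell) result.push_back(j);
    }
  }
  return result;
}
```

With a variable smoothing length, the cell size would have to be chosen from the largest h in the population (or the binning rebuilt per level), which is exactly where the difficulty you describe comes in.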

-Stefan 

Am 2015-11-25 21:53, schrieb Erik Schnetter: 

> Stefan 
> 
> There exist several algorithms for managing the SPH particles and finding interactions. Personally, I dislike using an algorithm that would e.g. change the particle radius depending on the process decomposition. 
> 
> If I were to implement an SPH method for astrophysics in Cactus, then I would probably choose a binary tree. I would make the process decomposition implied by this tree independent of how Cactus decomposes its grid functions. Thus you can test grid functions and particle algorithms independently. 
> 
> To couple particles and grids, you need two ingredients: 
> - Interpolate grid quantities at particle locations 
> - Deposit particle quantities onto grid 
> 
> The first already exists in Cactus; you would use the Cactus interpolator for this. 
> 
> The second is particle-specific, and this routine needs to be written. Determining the grid cell enclosing a particle is the main ingredient. For PUGH (a uniform grid) this is straightforward; for Carpet, there is a routine "gh::locate_position" that one would call. 
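The deposit step Erik describes could be sketched, for a uniform 1D grid, roughly as follows (a hedged illustration with invented names; a Carpet version would first call gh::locate_position to find the component owning the point):

```cpp
// Sketch: locate the grid cell enclosing a particle on a uniform grid and
// deposit a particle quantity with linear (cloud-in-cell) weights.
// Illustrative only -- not an existing Cactus routine.
#include <cassert>
#include <cmath>
#include <vector>

// Deposit 'value' at position x onto a 1D grid with origin x0, spacing dx.
void deposit_1d(std::vector<double>& grid, double x0, double dx,
                double x, double value) {
  const double s = (x - x0) / dx;    // position in grid units
  const int i = (int)std::floor(s);  // index of the enclosing cell
  const double w = s - i;            // fractional offset in [0,1)
  grid[i]   += (1.0 - w) * value;    // linear weights conserve the total
  grid[i+1] += w * value;
}
```

The same floor-and-remainder step gives the enclosing cell in each dimension; higher-order SPH kernels would simply spread the deposit over more cells.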
> 
> Apart from these considerations, I have a personal preference for algorithms that are derived from a Lagrangian. A variable smoothing length is likely important in astrophysics since you will encounter large density differences there. 
> 
> -erik 
> 
> On Wed, Nov 25, 2015 at 2:18 AM, Stefan Ruehe <sruehe at astrophysik.uni-kiel.de> wrote:
> 
>> Good morning, 
>> 
>> I have given some thought to the problems with SPH and MPI. 
>> 
>> I found a paper by Valdez-Balderas et al (2012) (http://adsabs.harvard.edu/abs/2012arXiv1210.1017V [1]), which could help to solve some of the problems. 
>> 
>> They suggest a particle halo on each processor unit, in which the neighbour particles of the adjacent processors are stored. This halo has to be synchronized in each timestep. 
>> 
>> One problem of this method is that SPH then has to use a fixed smoothing length, because otherwise the volume of the halo cannot be set. A variable smoothing length, however, is required for adaptive refinement in the SPH algorithm. I would suggest using a semi-fixed smoothing length, which is smaller on higher refinement levels. This could reduce the disadvantages of a fixed smoothing length. 
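To illustrate the halo idea (a 1D sketch with invented names, not code from the paper): with a fixed h, the halo width is a constant multiple of h, so each process can decide locally which of its particles a neighbour process will need.

```cpp
// Sketch: selecting particles for a neighbour's halo, assuming a FIXED
// smoothing length h. Illustrative only -- a real implementation would
// MPI_Isend/MPI_Irecv these lists each timestep.
#include <cassert>
#include <vector>

struct P { double x; };

// Particles owned by the domain [lo, hi) that the right-hand neighbour
// needs in its halo: everything within the kernel support (2h) of x = hi.
std::vector<int> halo_for_right(const std::vector<P>& ps,
                                double hi, double h) {
  const double width = 2.0*h;        // halo width fixed because h is fixed
  std::vector<int> send;
  for (int i = 0; i < (int)ps.size(); ++i)
    if (ps[i].x >= hi - width) send.push_back(i);
  return send;
}
```

With a variable h the width would have to be the maximum support of any particle near the boundary, which is why the fixed (or semi-fixed, per-level) smoothing length makes the halo volume well defined.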
>> 
>> What is your opinion on this? 
>> 
>> Now I am trying to test how good the SPH approximations of the hydrodynamic grid variables in TmunuBase are under such an "adaptive-fixed" smoothing length. I have another method in mind, but it would need more temporary memory. 
>> 
>> Best regards, 
>> 
>> Stefan Ruehe 
>> 
>> _______________________________________________
>> Users mailing list
>> Users at einsteintoolkit.org
>> http://lists.einsteintoolkit.org/mailman/listinfo/users [2]
> 
> -- 
> 
> Erik Schnetter <schnetter at cct.lsu.edu>
> http://www.perimeterinstitute.ca/personal/eschnetter/ [3]
 

Links:
------
[1] http://adsabs.harvard.edu/abs/2012arXiv1210.1017V
[2] http://lists.einsteintoolkit.org/mailman/listinfo/users
[3] http://www.perimeterinstitute.ca/personal/eschnetter/

