[Users] reduction in carpet + mismatched declaration in ML_ADMQuantities

Erik Schnetter schnetter at cct.lsu.edu
Fri Jan 7 10:52:52 CST 2011


Scott,

Thanks for the clarification. I should have looked up the definition
of the variables to get things right...

-erik

On Fri, Jan 7, 2011 at 11:46 AM, Scott Noble <scn at astro.rit.edu> wrote:
>
> Hello All,
>
> I just want to remind the users that
>
> dens = \sqrt(\gamma) rho W = \sqrt(-g) rho u^t
>
> where \gamma is the determinant of the spatial metric, rho is the rest-mass
> density, W is the Lorentz factor, u^t is the time component of the
> 4-velocity, and g is the determinant of the full spacetime metric.
>
> That means that Erik's first M integral has a factor of u^t that does not
> appear in the integrand of the second M integral. Using "dens" and other
> already-defined grid functions, the integrand one would need in order to
> recover the second M integral is (dens * alp / w_lorentz).
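>
> A minimal local-mode sketch of how one might fill such an integrand
> (scheduled in MoL_PseudoEvolution; the routine name and the grid function
> "mass_integrand" are hypothetical, and the thorn is assumed to inherit
> from GRHydro, HydroBase and ADMBase so that dens, alp and w_lorentz are
> available):
>
> #include "cctk.h"
> #include "cctk_Arguments.h"
> #include "cctk_Parameters.h"
>
> void Fill_Mass_Integrand(CCTK_ARGUMENTS)
> {
>   DECLARE_CCTK_ARGUMENTS;
>   DECLARE_CCTK_PARAMETERS;
>
>   for (int k = 0; k < cctk_lsh[2]; k++)
>     for (int j = 0; j < cctk_lsh[1]; j++)
>       for (int i = 0; i < cctk_lsh[0]; i++)
>       {
>         const int idx = CCTK_GFINDEX3D(cctkGH, i, j, k);
>         /* integrand of the second M integral: dens * alp / W = sqrt(-g) rho */
>         mass_integrand[idx] = dens[idx] * alp[idx] / w_lorentz[idx];
>       }
> }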
>
>
> cheers, scott n.
>
>
>
>
> Erik Schnetter wrote:
>>
>> Hee Il
>>
>> I'm not quite sure what you mean by "volume measure", so let me
>> explain from the beginning.
>>
>> 1. Since GRHydro defines the density "dens" as a tensor density, not
>> as a tensor, the total mass is defined as
>>
>> M := \int dens dV
>>
>> where no power of the determinant of the metric is necessary any more.
>> I believe you could instead also write
>>
>> M := \int rho sqrt(det g) dV
>>
>> where rho is the non-densitized rest mass density as defined in HydroBase.
>>
>> 2. Carpet's reduction operators take the relative size of the grid
>> cells into account. That is, they take into account that finer grid
>> cells are smaller, and they also take into account that only fractions
>> of grid cells may contribute near refinement boundaries. (Refinement
>> boundaries can cut grid cells if you use vertex centered refinement.)
>> However, Carpet's reduction operators do not take the coordinate
>> system into account.
>>
>> This means that you need to multiply the result by the coordinate volume
>> of a coarse grid cell. (This should probably be changed at some point.)
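>>
>> For instance, a minimal global-mode sketch (scheduled at analysis with
>> OPTIONS: global; the routine name is a placeholder) that sums
>> GRHydro::dens and applies the coarse-grid cell volume might look like:
>>
>> #include "cctk.h"
>> #include "cctk_Arguments.h"
>>
>> void Integrate_Mass(CCTK_ARGUMENTS)
>> {
>>   DECLARE_CCTK_ARGUMENTS;
>>
>>   const int sum_handle = CCTK_ReductionHandle("sum");
>>   const int varindex   = CCTK_VarIndex("GRHydro::dens");
>>   CCTK_REAL mass_sum   = 0.0;
>>
>>   if (CCTK_Reduce(cctkGH, -1, sum_handle, 1, CCTK_VARIABLE_REAL,
>>                   &mass_sum, 1, varindex))
>>   {
>>     CCTK_WARN(1, "Reduction of GRHydro::dens failed");
>>     return;
>>   }
>>
>>   /* the "sum" reduction weights cells by their relative size, but not by
>>      the coordinate system, so multiply by the coarse-grid cell volume */
>>   const CCTK_REAL mass = mass_sum *
>>     cctk_delta_space[0] * cctk_delta_space[1] * cctk_delta_space[2];
>>
>>   CCTK_VInfo(CCTK_THORNSTRING, "Total rest mass M = %g", (double)mass);
>> }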
>>
>> -erik
>>
>> On Fri, Jan 7, 2011 at 12:00 AM, Hee Il Kim <heeilkim at gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> I'm still confused about the Carpet reduction. The basic reduction method
>>> in Christian's code is the same as mine, but I thought what I did before
>>> needed to be improved. First of all, my phrase "integration over only the
>>> coarsest level" was very inaccurate. Anyway, let me ask again.
>>>
>>> What people do for a reduction like "Mass = integral of density *
>>> volume_measure" is:
>>>
>>> ------  Method I --------
>>> * In local mode at mol_pseudoevolution or poststep:
>>>     define Integrand = density    ! note: not "density * volume_measure"
>>>
>>> * In global mode at analysis:
>>>     CCTK_Reduce(Integrand --> Mass_temp)
>>>     Mass = Mass_temp * product(cctk_delta_space)
>>> ------------------------------
>>>
>>> Why do people define "Integrand = density" rather than "Integrand =
>>> density * volume_measure"? If I take the latter, I get a very weird
>>> result, so presumably people do not do that. To my understanding, the
>>> above Method I will produce
>>>
>>> Result I: Mass = (Sum over reflev=0 of Carpet-weighted density(i,j,k) * dV_0)
>>>                + (Sum over reflev=1 of Carpet-weighted density(i,j,k) * dV_0)
>>>                + ...
>>>
>>> But what I wanted (expected) to get by defining "Integrand = density *
>>> volume_measure" is the more accurate
>>>
>>> Result II: Mass = (Sum over reflev=0 of Carpet-weighted density(i,j,k) * dV_0)
>>>                 + (Sum over reflev=1 of Carpet-weighted density(i,j,k) * 1/8 * dV_0)
>>>                 + ...
>>>
>>> What does the above Method I actually produce: Result I, Result II, or
>>> something else? And is the new reduction API mentioned in the reference
>>> manual still under development?
>>>
>>> Thanks in advance,
>>>
>>> Hee Il
>>>
>>>
>>>
>>> 2011/1/6 Hee Il Kim <heeilkim at gmail.com>
>>>>
>>>> Christian, thanks so much !
>>>>
>>>> Your code seems to contain everything I wanted to calculate.
>>>>
>>>> There is one minor mismatched declaration in ML_ADMQuantities that
>>>> should be fixed at some point:
>>>>
>>>> ####
>>>>
>>>> [khi at zenith src]$ diff ML_ADMQuantities.c.ori ML_ADMQuantities.c
>>>> 56c56
>>>> <   const char *groups[] =
>>>>
>>>> {"grid::coordinates","Grid::coordinates","McLachlan::ML_curv","McLachlan::ML_Gamma","McLachlan::ML_lapse","McLachlan::ML_log_confac","McLachlan::ML_metric","McLachlan::ML_shift","McLachlan::ML_trace_curv","ML_ADMQuantities::ML_Jadm","ML_ADMQuantities::ML_Madm"};
>>>> ---
>>>>>
>>>>>   const char *groups[] =
>>>>>
>>>>> {"grid::coordinates","Grid::coordinates","ML_BSSN::ML_curv","ML_BSSN::ML_Gamma","ML_BSSN::ML_lapse","ML_BSSN::ML_log_confac","ML_BSSN::ML_metric","ML_BSSN::ML_shift","ML_BSSN::ML_trace_curv","ML_ADMQuantities::ML_Jadm","ML_ADMQuantities::ML_Madm"};
>>>>
>>>> ####
>>>>
>>>> Thanks again,
>>>>
>>>> Hee Il
>>>>
>>>> 2011/1/5 Christian D. Ott <cott at tapir.caltech.edu>
>>>>>
>>>>> Dear Hee Il,
>>>>>
>>>>> I agree that getting this right is tricky. You must fill the grid
>>>>> functions that you want to reduce in MoL_PseudoEvolution in local mode,
>>>>> then do the reduction in CCTK_Analysis in global mode, for example:
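>>>>>
>>>>> (A schedule.ccl sketch of that pattern; the routine names are just
>>>>> placeholders.)
>>>>>
>>>>> schedule Fill_Mass_Integrand IN MoL_PseudoEvolution
>>>>> {
>>>>>   LANG: C
>>>>> } "Fill the integrand grid functions (local mode)"
>>>>>
>>>>> schedule Integrate_Mass AT analysis
>>>>> {
>>>>>   LANG: C
>>>>>   OPTIONS: global
>>>>> } "Reduce the integrand over all refinement levels (global mode)"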
>>>>>
>>>>> I am attaching a Thorn called ZelmaniAnalysis that I use for the
>>>>> kinds of things you want. The reduction works with Carpet.
>>>>>
>>>>> Best,
>>>>>
>>>>>  - Christian
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Jan 05, 2011 at 09:18:21PM +0900, Hee Il Kim wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Belated Happy New Year !
>>>>>>
>>>>>> I am writing a simple code for the calculation of the ADM mass and
>>>>>> angular momentum using ML_ADMQuantities. As you know, this requires a
>>>>>> reduction procedure to sum up the integrand, e.g., density * volume
>>>>>> measure. I have tried various reduction APIs, including the old
>>>>>> deprecated one, but I only succeeded in integrating over the coarsest
>>>>>> grid level, which was not accurate.
>>>>>>
>>>>>> If I use the new reduction API from the Cactus reference manual,
>>>>>> CCTK_ReduceGridArrays,
>>>>>>
>>>>>> CCTK_ReduceGridArrays(const cGH *GH,
>>>>>>                       int dest_processor,
>>>>>>                       int local_reduce_handle,
>>>>>>                       int param_table_handle,
>>>>>>                       int N_input_arrays,
>>>>>>                       const CCTK_INT input_array_variable_indices[],
>>>>>>                       int M_output_values,
>>>>>>                       const CCTK_INT output_value_type_codes[],
>>>>>>                       void* const output_values[]);
>>>>>>
>>>>>> I encounter the following error:
>>>>>> ....
>>>>>> WARNING[L3,P0] (Cactus): CCTK_ReduceGridArrays: no grid array
>>>>>>   reduction registered
>>>>>> WARNING level 0 in thorn ML_Analysis processor 0 host compute-0-5.local
>>>>>>  (line 159 of Evaluate_ADMQuantities.F90):
>>>>>>  -> Failed to integrate J_z
>>>>>>
>>>>>> The param_table_handle argument is not clear to me, but it does not
>>>>>> seem to be causing the problem. The reduction operators are registered
>>>>>> by CactusBase/LocalReduce.
>>>>>>
>>>>>> Could you let me know which reduction function I have to use for this
>>>>>> calculation, i.e., for the integration over all refinement levels?
>>>>>> Should I loop over refinement levels, like
>>>>>> ...
>>>>>> do loop over reflev
>>>>>>   call CCTK_Reduce
>>>>>> enddo
>>>>>> ...
>>>>>>
>>>>>> Thanks in advance,
>>>>>>
>>>>>> Hee Il
>>>>>> _______________________________________________
>>>>>> Users mailing list
>>>>>> Users at einsteintoolkit.org
>>>>>> http://lists.einsteintoolkit.org/mailman/listinfo/users
>>>
>>>
>>
>>
>>
>



-- 
Erik Schnetter <schnetter at cct.lsu.edu>   http://www.cct.lsu.edu/~eschnett/

