[Users] McLachlan test suite

Peter Diener diener at cct.lsu.edu
Wed Jun 16 17:48:24 CDT 2010


Hi Ian,

On Thu, 17 Jun 2010, Ian Hinder wrote:

>
> On 16 Jun 2010, at 20:26, Frank Loeffler wrote:
>
>> On Wed, Jun 16, 2010 at 04:11:20PM +0200, Ian Hinder wrote:
>>> I have removed output for the reduction, but I now get differences
>>> above tolerance in the Hamiltonian constraint:
>>
>> I see the same, for OMP_NUM_THREADS=1 and two MPI processes. That
>> level of error in the constraints is probably to be expected and we
>> should increase the tolerance value for those variables.
>
> I was thinking, very naively, that since the Hamiltonian constraint
> contains 2nd derivatives of the metric, which involve dividing by
> dx^2, we would expect the roundoff errors to be amplified by about
> two orders of magnitude.  So if we are just barely meeting the
> default 1e-11 Cactus tolerance in the evolved variables, then the
> constraints might be worse.  But I don't really understand this.
>
> However, I have just tried using the Intel compiler options "-fp-model
> precise -fp-model source" as recommended by Intel if you want
> reproducibility of floating point computations,
>
> 	http://software.intel.com/en-us/articles/consistency-of-floating-point-results-using-the-intel-compiler/
>
> and if you do that, the test suites pass with "files identical" with
> 1, 2 and 8 threads on my laptop.  So this confirms that the problem
> comes from a behaviour of the compiler, and not from a problem in the
> code.  Of course we would like to test the compiler settings we
> actually use in production, and the above settings probably cause a
> significant loss of optimisation.
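
For reference, those two options would just be appended to the compiler
flag variables in the Cactus option list used for the build.  A rough
sketch, where CFLAGS/CXXFLAGS/F90FLAGS are the standard Cactus
configuration variables and the -O2 baseline is only an illustrative
placeholder:

  CFLAGS   = -O2 -fp-model precise -fp-model source
  CXXFLAGS = -O2 -fp-model precise -fp-model source
  F90FLAGS = -O2 -fp-model precise -fp-model source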

That is good to know.
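
To make the dx^2 argument above concrete, here is a minimal,
self-contained sketch (not taken from McLachlan; the grid spacing and
the size of the perturbation are purely illustrative).  A roundoff-level
error of about 1e-13 in a field value grows to about 2e-11 once it has
passed through a centred second-derivative stencil with dx = 0.1, which
is why constraint output can sit above a tolerance that the evolved
variables still meet:

  #include <stdio.h>
  #include <math.h>

  /* Toy model of the amplification: a centred second-derivative
     stencil divides by dx^2, so roundoff-level noise in the input
     field shows up a factor of order 1/dx^2 larger in quantities
     (such as the Hamiltonian constraint) built from second
     derivatives. */
  int main(void)
  {
    const double dx  = 0.1;     /* illustrative grid spacing */
    const double eps = 1.0e-13; /* roundoff-level perturbation */

    const double f_m = cos(-dx), f_0 = cos(0.0), f_p = cos(dx);

    const double d2f      = (f_p - 2.0 * f_0         + f_m) / (dx * dx);
    const double d2f_pert = (f_p - 2.0 * (f_0 + eps) + f_m) / (dx * dx);

    /* Prints roughly 2e-11: the 1e-13 perturbation times 2/dx^2. */
    printf("error in d2f: %e\n", fabs(d2f_pert - d2f));
    return 0;
  }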

>> However, we currently cannot do that on a per-variable basis, and we
>> cannot implement and test this in time for the release. Thus, I
>> suggest splitting the test suite for the moment into two almost
>> identical suites, one generating only the constraint output and one
>> generating all other output. Those two can then have different
>> tolerance levels. Once we have implemented something in Cactus which
>> can set the tolerance on a per-file basis, we could combine them
>> again.
>
>
> Is it right to set the tolerance based on what we observe, rather
> than on what we conclude we should expect? It feels a bit like
> cheating to me.

Good luck with tracing roundoff errors through a BSSN RHS evaluation
given some data...
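
To make Frank's interim proposal concrete, the split might look roughly
like the following.  This is only a sketch: the parameter file names are
hypothetical, and the ML_BSSN group names and the IOASCII::out1D_vars
parameter are written from memory rather than copied from the actual
test files.

  # ml_bssn_test.par (hypothetical): everything except the constraints,
  # checked against the default tolerance
  IOASCII::out1D_vars = "ML_BSSN::ML_log_confac
                         ML_BSSN::ML_metric
                         ML_BSSN::ML_trace_curv
                         ML_BSSN::ML_curv
                         ML_BSSN::ML_lapse
                         ML_BSSN::ML_shift"

  # ml_bssn_test_constraints.par (hypothetical): constraint output only,
  # so that this suite can be given a looser tolerance
  IOASCII::out1D_vars = "ML_BSSN::ML_Ham ML_BSSN::ML_mom"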

Cheers,

   Peter

