[ET Trac] [Einstein Toolkit] #824: Update tests which rely on BSSN_MoL to use ML_BSSN instead
Einstein Toolkit
trac-noreply at einsteintoolkit.org
Tue May 15 13:36:43 CDT 2012
#824: Update tests which rely on BSSN_MoL to use ML_BSSN instead
---------------------+------------------------------------------------------
Reporter: hinder | Owner: rhaas
Type: task | Status: review
Priority: major | Milestone: ET_2012_05
Component: Other | Version:
Resolution: | Keywords: testsuites
---------------------+------------------------------------------------------
Comment (by rhaas):
I did a bit more digging into what could cause the observed behaviour
(that I get test failures between different machines/compilers/options
unless epsilon is huge) and just realised that the defaults for the
tolerances (from RunTestUtils.pl and the "Print tolerance table" table)
seem to be ABSTOL=1e-12, RELTOL=0. This means we are in trouble when
using default tolerances unless the range of values is around unity,
which nicely matches my observation that the tests tend to fail for odd
reasons unless the range of all values in the files is about unity.
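To illustrate why a purely absolute tolerance is magnitude-dependent, here is a
minimal sketch in Python. I am assuming the usual combined pass/fail criterion
(difference bounded by ABSTOL plus RELTOL scaled by the value's magnitude); the
exact formula RunTestUtils.pl uses may differ, and close_enough is a
hypothetical helper, not part of the harness.

```python
def close_enough(a, b, abstol=1e-12, reltol=0.0):
    """Hypothetical pass/fail criterion: the difference must lie within
    the absolute tolerance plus the relative tolerance scaled by the
    larger magnitude of the two values."""
    return abs(a - b) <= abstol + reltol * max(abs(a), abs(b))

# Values of order unity: a ~1e-13 perturbation passes with ABSTOL alone.
print(close_enough(1.0, 1.0 + 1e-13))                          # True

# Values of order 1e4: the same *relative* perturbation fails,
# because the absolute difference (~1e-9) exceeds ABSTOL=1e-12.
print(close_enough(1.0e4, 1.0e4 * (1 + 1e-13)))                # False

# A non-zero RELTOL makes the criterion magnitude-independent again.
print(close_enough(1.0e4, 1.0e4 * (1 + 1e-13), reltol=1e-12))  # True
```

With RELTOL=0 the acceptable error is a fixed 1e-12 regardless of the data, so
any file whose values are far from unity either fails spuriously or is tested
far too loosely.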
This then seems to be the major reason for the problem, and I should be
able to avoid it by adding a RELTOL = 1e-13 (or so) to test.ccl. This
seems to be the issue with the recent testsuites. It would seem to me as
if the default tolerances should be changed to have a non-zero RELTOL
value. Would that make sense?
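For concreteness, the per-test override I have in mind would be a fragment
along these lines in the test's test.ccl (a sketch, assuming the ABSTOL/RELTOL
keyword syntax of the Cactus testsuite machinery; the exact values are the
ones discussed above, not a recommendation):

```
ABSTOL 1e-12
RELTOL 1e-13
```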
Regarding the singularity, the problem I had smoothing it out was that
none of the ID thorns provides a nice method to smooth out the
singularity. I used NoExcision in a number of tests to cut out a region.
Unfortunately, without Carpet I can only use the "old" method and then
have to fudge Minkowski_scale to avoid discontinuities at the edge of the
excised region (and correspondingly large values of the derivatives).
As far as having data close to production data is concerned, I am not
sure I agree. Our resolution in regression tests is so low that all the
physics is wrong anyway, so I would much rather construct a test that
robustly exercises the code than something that is close to production
data. I then don't mind if the answers are all systematically wrong as
long as they are still sensitive to changes in the code. In this sense
any initial data (any epsilon) is fine for me, since the code that is
executed does not depend on the data. This might be very different for
hydro, where the code path taken depends on the data on the grid.
--
Ticket URL: <https://trac.einsteintoolkit.org/ticket/824#comment:28>
Einstein Toolkit <http://einsteintoolkit.org>