[Users] Thorn ADM
schnetter at cct.lsu.edu
Wed Dec 4 10:57:25 CST 2013
On Dec 4, 2013, at 10:28 , Ian Hinder <ian.hinder at aei.mpg.de> wrote:
> On 3 Dec 2013, at 05:05, Frank Loeffler <knarf at cct.lsu.edu> wrote:
>> I would like to take thorn ADM out of the Einstein Toolkit thornlist
>> entirely. It's using ADMMacros (with all the problems that come with
>> it), and all its test cases use a static conformal metric, which isn't
>> supported anymore and which currently makes them all fail as well.
>> I just committed a bunch of changes to testsuites of other thorns that
>> were using ADM. Most of them were trivial, in one case I had to create a
>> new testsuite using ML_BSSN instead. Now no testsuite of another thorn
>> within the toolkit uses ADM. It's time to retire it. If somebody really
>> would like to use ADM it would likely be better to look at ML_ADM
>> instead (which isn't in the toolkit, but could probably be if there is
>> interest).
>> Please object here loudly if you disagree. If we don't hear from you in
>> some time, thorn ADM will be removed from the thornlist and will no
>> longer be part of releases.
> Hi Frank,
> On what timescale will the thorn be removed? As long as it is in the thornlist, its tests fail. Having failing tests is bad, as it obscures new failures. Since the test cases use the now-unsupported static conformal metric, I suggest that they be removed now, even if the thorn itself remains in the thornlist for some time. In the very unlikely event that someone wants to resurrect the test cases, they are in the version history anyway.
[Slightly off-topic; this is a general argument, so please don't shoot it down by arguing about the current state of thorn ADM. Also, please don't dissect my argument paragraph by paragraph -- please respond to the whole argument.]
I believe that it is unrealistic to address all test case failures immediately. The only feasible way to do so would be a hard "revert" policy, where all changes that introduce test case failures are quickly reverted. Note that this means that changes that are harmless in and of themselves would also need to be reverted if they happen to uncover failures that were hidden before. This is probably not what we want.
We would then have to live with test cases that fail for extended periods of time (probably weeks at least) until the underlying issue has been addressed. In some cases there will even need to be some discussion on how to address the test case failures. Removing failing test cases that are not quickly corrected is also not what we generally want to do; presumably, someone put effort into the test case.
What we need is thus a way to "disable" a test case so that it doesn't continue to fail and detract from real issues, such as a commit that introduces a genuine bug. We want to be able to "disable" a test case in such a way that it is still marked as "valid", e.g. as "valid in principle, but currently under construction".
Of course, marking a test case as "xfail" (expected to fail) can't be a knee-jerk reaction; this would defeat the purpose. However, this gives people a legitimate way to take their time to properly discuss and implement a solution to a complex issue that may be uncovered, without pressure to either remove a thorn or remove a test case. When used in moderation, such a mechanism can be quite valuable.
Obviously, when the time for a release comes, one would hope to reduce the number of xfail test cases, just as one would hope to reduce the number of open bugs. In an ideal world, each bug report would be accompanied by a test case, so that one knew whether the bug had been corrected. In that case, we would definitely want to keep all the (currently failing) test cases, even if it may take weeks or months to address the bugs.
Thus I suggest adding a flag to each test case that can be set to either XSUCCEED or XFAIL. Test cases that don't behave according to their expectation are then counted as failures.
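To make the proposal concrete, here is a minimal sketch of the counting rule in Python. This is purely illustrative, not an actual Cactus test-suite implementation; the names `Expectation` and `evaluate` are invented for this example.

```python
# Hypothetical sketch of the proposed XSUCCEED/XFAIL flag.
# A test counts as a failure only when it defies its declared expectation:
# an XSUCCEED test that fails, or an XFAIL test that unexpectedly passes.
from enum import Enum

class Expectation(Enum):
    XSUCCEED = "xsucceed"   # expected to pass (the default)
    XFAIL = "xfail"         # known failure, currently under discussion/repair

def evaluate(passed: bool, expectation: Expectation) -> bool:
    """Return True if the test behaved as expected."""
    return passed == (expectation is Expectation.XSUCCEED)

# Tally over a mixed set of (result, expectation) pairs:
results = [
    (True,  Expectation.XSUCCEED),  # passes as expected  -> OK
    (False, Expectation.XFAIL),     # fails as expected   -> OK
    (False, Expectation.XSUCCEED),  # genuine regression  -> counted
    (True,  Expectation.XFAIL),     # unexpected pass     -> counted
]
failures = sum(not evaluate(p, e) for p, e in results)
print(failures)  # -> 2
```

Note that counting an unexpected pass as a failure is deliberate: it forces the XFAIL flag to be removed once the underlying issue is fixed, so the list of disabled tests cannot silently grow stale.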
Erik Schnetter <schnetter at cct.lsu.edu>
My email is as private as my paper mail. I therefore support encrypting
and signing email messages. Get my PGP key from http://pgp.mit.edu/.