[Users] Wanted: PushComponents

Erik Schnetter schnetter at cct.lsu.edu
Mon Aug 15 13:30:24 CDT 2011


I am dreaming of PushComponents, a simpler way to discuss patches, to
commit them, and to ensure test cases don't silently begin to fail.

Applying a patch to the Einstein Toolkit commonly involves several stages:
- A problem or missing feature needs to be identified
- There is a bit of discussion on the mailing list or in TRAC
- A patch is created
- The test cases are run
- A patch is proposed
- The patch is reviewed
- The patch may be modified or updated
- The test cases are run (again), hopefully on several systems
- The committer applies the patch to a clean checkout, maybe runs the
test cases again
- The committer writes a commit message and commits/pushes the patch
This is a bit more complicated if several thorns are involved, if the
original patch author doesn't have commit rights, or if the underlying
problem occurs only on a specific system.

I suggest defining a self-contained patch format, probably stealing
the idea from git. This format would define the complete patch,
possibly for several components, in such a way that a single command
(PushComponents) can apply the patch, check for conflicts, create
commit messages, and push the commits upstream. Ideally, it would also
keep the distinction between author and committer.
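For illustration, git's format-patch output already has most of these
properties: it is a self-contained text file that records the author
and date separately from whoever applies it. The commit id, names, and
file below are made up:

```
From 1a2b3c4d5e6f Mon Sep 17 00:00:00 2001
From: Jane Developer <jane at example.org>
Date: Mon, 15 Aug 2011 12:00:00 -0500
Subject: [PATCH] Correct boundary handling in a thorn

---
 src/boundary.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
```

When such a patch is applied with "git am", the original author is
preserved, while the person applying it is recorded as the committer.
A multi-thorn variant would need to bundle one such patch per
repository.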

Building on this patch format, we would implement a script that
1. Checks out a clean copy of e.g. the Einstein Toolkit (GetComponents)
2. Applies the patch locally
3. Runs the test cases (and maybe also a specific parameter file)
locally as well as on a set of remote systems (SimFactory)
4. Checks the results, and
5. If all is fine: commits and pushes the patch (PushComponents)
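The driver tying these steps together could be quite small. Below is a
sketch in Python; the command names and arguments (the PushComponents
invocations in particular) are placeholders for illustration, not real
interfaces:

```python
import subprocess

def run_pipeline(steps, dry_run=False):
    """Run each pipeline step in order, stopping at the first failure.

    'steps' is a list of (description, command) pairs. Returns the
    descriptions of the steps that completed successfully.
    """
    done = []
    for description, command in steps:
        if dry_run:
            print("would run:", description, "->", " ".join(command))
        else:
            result = subprocess.run(command)
            if result.returncode != 0:
                print("step failed:", description)
                break
        done.append(description)
    return done

# Placeholder commands sketching the proposed workflow; the real
# invocations would depend on GetComponents, SimFactory, and the
# (hypothetical) PushComponents tool.
steps = [
    ("check out clean copy", ["GetComponents", "einsteintoolkit.th"]),
    ("apply patch locally",  ["PushComponents", "--apply", "fix.patch"]),
    ("run test cases",       ["sim", "create-run", "testsuite"]),
    ("commit and push",      ["PushComponents", "--push", "fix.patch"]),
]

if __name__ == "__main__":
    run_pipeline(steps, dry_run=True)
```

Because each step either succeeds or aborts the pipeline, a failed run
can simply be retried with the same single command once the patch is
fixed.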

In this way, the currently separate actions (apply a patch, test it,
and commit it) are joined together in a way that is easy to use, and
people are no longer tempted to commit patches without testing. If
running the test cases takes an hour or two, this doesn't matter --
it's all automated, and it's all running in the background. If
something fails, this isn't a burden either -- after all, one only has
to run a single command to try again.

To test whether the test suite returns the expected result, I suggest
keeping track of the test cases whose failures may be temporarily ignored.
This will allow us to use this new system right away, even before we
have corrected all currently failing test cases. That is, each new
commit may only reduce the number of failures.
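The bookkeeping for this could be a simple set comparison: new failures
block the commit, and fixed tests shrink the known-failure list. A
minimal sketch (the test names below are made up):

```python
def check_testsuite(failing, known_failures):
    """Compare currently failing tests against the known-failure list.

    Returns (new_failures, fixed): new_failures are tests that broke
    beyond the known list and should block the commit; fixed are known
    failures that now pass and can be dropped from the list, so the
    number of allowed failures only ever shrinks.
    """
    new_failures = set(failing) - set(known_failures)
    fixed = set(known_failures) - set(failing)
    return new_failures, fixed

# Hypothetical example: two failures were already known, one got
# fixed, and nothing new broke -- so the commit may proceed.
known = {"ML_BSSN/test1", "GRHydro/shock"}
failing = {"GRHydro/shock"}

new_failures, fixed = check_testsuite(failing, known)
assert not new_failures  # nothing newly broken: OK to commit
print("may be removed from the known-failure list:", sorted(fixed))
```

The known-failure list itself would live in the repository, so that
removing an entry is itself a reviewed commit.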

-erik

-- 
Erik Schnetter <schnetter at cct.lsu.edu>   http://www.cct.lsu.edu/~eschnett/

