[Users] Wanted: PushComponents
Erik Schnetter
schnetter at cct.lsu.edu
Tue Aug 16 06:52:44 CDT 2011
On Mon, Aug 15, 2011 at 3:29 PM, Ian Hinder <ian.hinder at aei.mpg.de> wrote:
>
> On 15 Aug 2011, at 20:30, Erik Schnetter wrote:
>
>> I am dreaming of PushComponents, a simpler way to discuss patches, to
>> commit them, and to ensure test cases don't silently begin to fail.
>
> With appropriate daily monitoring of the test cases in an automated build and test system, the failures are not "silent". One could even have an email generated (to the ET list?) when a test case is newly failing. Commits which break the tests happen because people don't run the tests before committing. We need to make it easier to do so, as you suggest.
>
>> Applying a patch to the Einstein Toolkit commonly requires several stages:
>> - A problem or missing feature needs to be identified
>> - There is a bit of discussion on the mailing list or in TRAC
>> - A patch is created
>> - The test cases are run
>> - A patch is proposed
>> - The patch is reviewed
>> - The patch may be modified or updated
>> - The test cases are run (again), hopefully on several systems
>> - The committer applies the patch to a clean checkout, maybe runs the
>> test cases again
>> - The committer writes a commit message and commits/pushes the patch
>> This is a bit more complicated if several thorns are involved, if the
>> original patch author doesn't have commit rights, or if the underlying
>> problem occurs only on a specific system.
>>
>> I suggest defining a self-contained patch format, probably stealing
>> the idea from git. This format would define the complete patch,
>> possibly for several components, in such a way that a single command
>> (PushComponents) can apply the patch, check for conflicts, create
>> commit messages, and push the commits upstream. Ideally, it would also
>> keep the distinction between author and committer.
>
> Yes - this is a good idea. Do you know if it has been done before? We might be able to re-use an existing format and tools.
>
>> Building on this patch format, we would implement a script that
>> 1. Checks out a clean copy of e.g. the Einstein Toolkit (GetComponents)
>> 2. Applies the patch locally
>> 3. Runs the test cases (and maybe also a specific parameter file)
>> locally as well as on a set of remote systems (SimFactory)
>> 4. Checks the results, and
>> 5. If all is fine: commits and pushes the patch (PushComponents)
>
> Instead of having each user do this, I think it would be much more naturally integrated with the automated build and test system, which already knows how to check out the code and run the tests (eventually on multiple systems). This does not rely on the user maintaining a session on their work system. The test machine cannot commit the patch, as it won't be given commit access to all the required repositories, and I don't know if it is a good idea to have the actual commit performed automatically anyway.
>
> One could imagine a web interface to the build and test system which accepted an uploaded "super-patch", as described above, and ran the automated tests on a fresh checkout with that patch applied. This could probably be integrated with TRAC. When finished, the user would get an email with the results and a URL to the web report for those results. Perhaps this could also add an automatic comment to the corresponding TRAC ticket. The email would say "OK to apply" if no tests newly failed. The user could then run PushComponents on the patch file, which performs the commit. If we wanted to get very fancy eventually, the test system could sign the commit.
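
For concreteness, such a self-contained "super-patch" could look
roughly like concatenated "git format-patch" output, with one extra
header naming the component each section applies to. The "Component:"
header, the component names, and the file paths below are only meant
as an illustration:

Component: EinsteinBase/ADMBase
From: Jane Doe <jane.doe at example.org>
Date: Tue, 16 Aug 2011 10:00:00 -0500
Subject: [PATCH] Add and document a new initial-data parameter

---
 param.ccl             | 5 +++++
 doc/documentation.tex | 4 ++++

diff --git a/param.ccl b/param.ccl
[...]

Component: EinsteinEvolve/GRHydro
From: Jane Doe <jane.doe at example.org>
[...]

Since "git format-patch" records the author and "git am" sets the
committer when the patch is applied, this would also preserve the
author/committer distinction mentioned above.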

I develop and maintain components that are not part of the official
Einstein Toolkit release, and I want these tested as well. I assume it
will be difficult to communicate my thorn list to the automated system,
to give it access to these private components, and to ensure that it
runs the tests on the additional machines that matter to me (my
notebook, Orca at Sharcnet), even if these are not officially supported
by the Einstein Toolkit. Also, if there are two patches waiting to be
committed, they may interact badly, so that committing the first
requires re-running the test cases for the second.

While developing a patch, I want immediate feedback on whether the
patch is good or not. Uploading my patch manually via a web interface
would be cumbersome; there would need to be a command line tool that
tells the testing system to test the new patch, and to do so right away
(without waiting in a queue behind other patches), and maybe to test
the patch only on a particular system, or to test only a few thorns.
All of this is about keeping the development turnaround short.
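
Something along these lines, say (the command name and all options are
hypothetical):

  # ask the test system to check this patch right away,
  # only on Orca, and only for two thorns
  test-patch my-change.patch --now --machine=orca \
      --thorns=McLachlan/ML_BSSN,McLachlan/ML_ADMConstraints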

Given all this, I don't see the advantage of a centralised, automated
system. Automated tagging of patches in Trac would be nice, but isn't
necessary. And, as you say, such an automated system shouldn't commit
patches automatically; but that separates testing (maybe with signing)
from committing again, two actions that should be tightly integrated so
that people actually execute them as one.
>> In this way, the currently separate actions (apply a patch, test it,
>> and commit it) are joined together in a way that is easy to use, and
>> people are not tempted any more to commit patches without testing. If
>> running the test cases takes an hour or two, this doesn't matter --
>> it's all automated, and it's all running in the background. If
>> something fails, this isn't a burden either -- after all, one only has
>> to run a single command to try again.
>
> If the test cases were faster, one could imagine a post-commit hook that ran them on some central system. I seem to remember you timing the test cases - do you see scope for reducing the testing time to, say, a few minutes?
>
>> To test whether the test suite returns the expected result, I suggest
>> keeping track of which test case failures may be temporarily ignored.
>> This will allow us to use this new system right away, even before we
>> have corrected all currently failing test cases. That is, each new
>> commit may only reduce the number of failures.
>
> Do you propose to enforce the system, or just make it easy for people to do the right thing? Would the shame of an email to the ET list every time a test newly failed, along with the list of commits and authors that had changed since the last successful test run, be enough encouragement to avoid the need for enforcement?
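
Regarding the bookkeeping for temporarily ignored failures: mechanically
this could be as simple as a text file listing the tolerated test cases,
plus a small check against the current failures. A rough sketch in
Python, where the file names and their one-test-per-line layout are
assumptions:

def read_list(filename):
    # one test name per line; blank lines and "#" comments are skipped
    with open(filename) as f:
        return set(line.strip() for line in f
                   if line.strip() and not line.startswith("#"))

known  = read_list("known-failures.txt")  # failures tolerated for now
failed = read_list("failed-tests.txt")    # failures from this test run

new_failures = failed - known
now_passing  = known - failed

if new_failures:
    print("FAIL: new test failures: " + ", ".join(sorted(new_failures)))
    raise SystemExit(1)
if now_passing:
    print("Note: now passing, remove from known-failures.txt: "
          + ", ".join(sorted(now_passing)))
print("OK: no new test failures")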

I propose to simplify things for developers, including myself. I would
like to have at my disposal a command with a higher-level notion of
"patch" (spanning multiple components and including a commit message,
etc.) that allows me to apply such a patch with a single command, to
test it with a single command on multiple systems (including checking
out a fresh copy of the source tree), and to commit it with a single
command (again across multiple components).
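
In other words, something like this, where the tool name and options
are again hypothetical:

  pushcomponents apply  my-change.patch
  pushcomponents test   my-change.patch --machines=localhost,orca
  pushcomponents commit my-change.patch

The "test" step would internally check out a fresh copy (GetComponents),
apply the patch, and drive the test runs on each machine through
SimFactory, as in the steps listed above.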
-erik
--
Erik Schnetter <schnetter at cct.lsu.edu> http://www.cct.lsu.edu/~eschnett/