[Users] Enabling CUDA/OpenCL thorns by default in Simfactory?
Erik Schnetter
schnetter at cct.lsu.edu
Tue May 27 10:23:48 CDT 2014
On Tue, May 27, 2014 at 10:51 AM, Frank Loeffler <knarf at cct.lsu.edu> wrote:
> On Tue, May 27, 2014 at 09:46:29AM -0500, Frank Loeffler wrote:
> > Wouldn't that mean that every checkout on such a machine fails, because
> > simfactory would unconditionally add these thorns to the thornlist,
> > regardless of whether they are actually present or not?
>
> Replying to myself: Is this really what Simfactory does - or does it
> only "enable" these thorns when they are present in the thornlist, but
> commented out? If the latter is the case, I think it should be fine,
> although it might still confuse people who don't even look at the
> Simfactory configuration of the machine and use a thornlist in which such
> a thorn is commented out but not actually present. Could we somehow give
> these users a hint about why Cactus fails with 'thorn not found', when the
> original thornlist clearly had it commented out?
>
As per our previous discussion, the standard ET thorn list will (and this
is already the current state):
1. Check out all CUDA and OpenCL thorns, on ALL machines
2. Disable all CUDA and OpenCL thorns when building Cactus, on ALL machines
If you are using this standard thorn list directly, then this is the only
reasonable way to go; other choices don't make sense.
Simfactory pre-processes thorn lists in the same way it pre-processes
option lists and parameter files, expanding variables. It is important to
have such a pre-processing step, since it avoids having to make these
machine-specific adjustments by hand. This is how Simfactory "fixes" things
for various machines; it is an important part of how it works.
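To make this concrete, here is a rough Python sketch of what such a
pre-processing pass does. It is purely illustrative: the function name, the
@VAR@ placeholder convention as applied to thorn lists, and the
enable-by-uncommenting rule are assumptions for the example, not
Simfactory's actual code.

import re

def preprocess_thornlist(lines, defines, enabled_thorns):
    """Expand @VAR@ placeholders and re-enable thorns that the target
    machine is known to support (illustrative sketch only)."""
    out = []
    for line in lines:
        # Substitute @VAR@ with the machine-specific definition, if any.
        line = re.sub(r"@(\w+)@",
                      lambda m: defines.get(m.group(1), m.group(0)),
                      line)
        # A thorn commented out in the generic list, but explicitly
        # enabled for this machine, is switched back on.
        candidate = line.lstrip("# ").strip()
        if line.lstrip().startswith("#") and candidate in enabled_thorns:
            line = candidate
        out.append(line)
    return out

Calling this once per machine, with that machine's definitions and set of
enabled thorns, turns a single generic thorn list into a machine-specific
one without anyone editing it by hand.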
Blue Waters supports both CUDA and OpenCL. We now have two options:
1. Use the standard ET thorn list there, i.e. leave CUDA and OpenCL
disabled by default
2. Automatically enable the CUDA and OpenCL thorns there
As others mentioned, it does not make sense to enable CUDA and OpenCL
everywhere, since they are not supported everywhere. Enabling the thorns
but adding a mechanism that turns them into dummy thorns that do nothing
also doesn't make sense -- think of a BSSN thorn written in CUDA; turning
it into a dummy thorn that does nothing on a non-CUDA machine would just
mean that the BSSN evolution doesn't happen there. There are machines that
support CUDA and there are machines that don't, and if you want to use CUDA
then you should get a clear error message when you try to build on a
machine that doesn't support it.
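To sketch the "fail loudly" alternative to dummy thorns -- again just an
illustration, with a made-up capability map and example thorn names rather
than an existing Simfactory interface -- such a check could look like this:

def check_requested_thorns(requested, machine_capabilities):
    """Refuse to build if a requested thorn needs a capability (CUDA,
    OpenCL, ...) that the target machine does not provide, instead of
    silently replacing it with a do-nothing dummy."""
    # Hypothetical map from thorn to the capability it requires.
    requires = {
        "Example/BSSN_CUDA": "cuda",
        "ExternalLibraries/OpenCL": "opencl",
    }
    for thorn in requested:
        capability = requires.get(thorn)
        if capability and capability not in machine_capabilities:
            raise RuntimeError(
                "Thorn %s requires %s, which this machine does not "
                "provide; pick another machine or drop the thorn."
                % (thorn, capability.upper()))

With a check like this, requesting a CUDA BSSN thorn on a machine whose
capability set lacks "cuda" stops the build right away, instead of quietly
producing a run in which no evolution happens.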
So -- how do we want to support CUDA on Blue Waters?
1. Ask people to manually change the thorn list, in effect asking them to
maintain one thorn list per machine
2. Let Simfactory do this, since it already knows what works on what
machine and what doesn't
I don't think there are any other choices.
In case you are wondering: what I propose won't break checking out thorns
on any machine, won't break the build on any machine, and won't require
people to do special things for special machines. I regularly build on many
machines, in an automated manner, and I don't maintain per-machine thorn
lists or parameter files. Simfactory has an MDB (machine database) that
encodes all the machines' peculiarities, and the information there is
sufficient to use all of our machines efficiently and automatically.
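For illustration only -- the key names and values below are assumptions
about the kind of information such an entry carries, not the real MDB
format -- the per-machine data could look like this, reusing
preprocess_thornlist from the sketch above:

machines = {
    "bluewaters": {
        "capabilities": {"cuda", "opencl"},
        "defines": {"GPU_THORNS": "yes"},
        "enabled_thorns": {"ExternalLibraries/OpenCL"},
    },
    "generic-cluster": {
        "capabilities": set(),
        "defines": {"GPU_THORNS": "no"},
        "enabled_thorns": set(),
    },
}

# One generic thorn list, specialized per machine without manual edits.
generic = [
    "CactusBase/CartGrid3D",
    "#ExternalLibraries/OpenCL",  # disabled in the generic list
]
for name, entry in machines.items():
    specialized = preprocess_thornlist(generic, entry["defines"],
                                       entry["enabled_thorns"])
    print(name, specialized)

The point is that this knowledge lives in one place, per machine, and every
generic thorn list, option list, or parameter file can be specialized from
it automatically.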
There is the argument that "this would surprise people". If we are worried
about people being surprised that a CUDA thorn can't be activated on a
non-CUDA machine, then maybe we need to raise our expectations a bit. If we
are worried about the converse -- that people are surprised that CUDA is
available on Blue Waters -- then maybe we should re-think our choice of
what the default should be (currently: CUDA not available). I'm sure people
can live with CUDA "suddenly" being available on Blue Waters.
-erik
--
Erik Schnetter <schnetter at cct.lsu.edu>
http://www.perimeterinstitute.ca/personal/eschnetter/