[Users] Thoughts on external libraries
Christian D. Ott
cott at tapir.caltech.edu
Fri Jun 10 21:14:10 CDT 2016
On 6/8/16 1:50 PM, Erik Schnetter wrote:
> On Wed, Jun 8, 2016 at 12:20 PM, Frank Loeffler <knarf at cct.lsu.edu
> <mailto:knarf at cct.lsu.edu>> wrote:
> Thanks, Erik, for the clear and sadly very true summary of the state of
> external libraries.
> On Wed, Jun 08, 2016 at 11:22:44AM -0400, Erik Schnetter wrote:
> > In the new system, there will be two kinds of external libraries:
> > - like MPI or HDF5: an external library must be present that Cactus uses
> > - like LORENE: we always build our own (more on this below)
> The two types would be "likely to be found as a system library
> someplace", and "not..." ?
> > (6) using a professional package manager that doesn't require root access,
> > such as Spack <https://github.com/LLNL/spack>
> Spack looks promising at a glance, though I haven't had a closer look,
> so I can't say more. It requires Python >2.6 (which would be a change
> for Cactus), but that shouldn't really be a big issue.
> It is intriguing to try it, but it is also still quite new, which makes
> it more likely to fail in 'unusual' situations (like compilers that
> require LD_LIBRARY_PATH or such).
> I would argue that most users already have most libraries installed on
> most machines anyway, HPC machines aside, for which we provide specific
> configurations. If they don't, they could usually get a usable system
> library just by installing it with their native package manager.
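For a non-HPC workstation, "just install it" usually amounts to a single package-manager command; an illustrative sketch (package names vary by distribution and release):

```shell
# Debian/Ubuntu:
sudo apt-get install libhdf5-dev libopenmpi-dev
# Fedora/RHEL:
sudo dnf install hdf5-devel openmpi-devel
# macOS with Homebrew:
brew install hdf5 open-mpi
```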
> I assert that most users would want Cactus to use those libraries, if
> present. That means Cactus would need to find them, be able to check if
> they are usable, and how they can be used. And that is exactly where we
> currently have most of our problems.
> That doesn't mean we should change the way we distribute and build
> libraries. Your ideas sound very good. They would avoid some of the
> kinds of problems we had with external libraries, but only some -
> unless you propose a "let us build, or specify everything by hand"
> approach.
> Do you know enough about Spack, and would you be willing to try using
> it to replace the 'build' functionality in some external libraries, and
> instead provide a simple script that users would run to 'set up a
> Cactus environment' before they start building configurations?
> Yes, I've been experimenting with Spack. You obviously need to make a
> few (machine-specific) choices, such as choosing which compiler to use,
> and/or which existing MPI environment (or building MPI with Spack). If
> you build packages independently, then there's a chance of conflicts,
> e.g. you build PETSc (which triggers a build of HDF5, using the standard
> options), and then you build HDF5 with its C++ interface, and then you
> end up with two HDF5 libraries that are incompatible. The solution is to
> either carefully specify dependencies, or to build everything in one go
> so that Spack can ensure consistency.
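The conflict described above can be avoided by constraining the shared dependency in the spec; a sketch using Spack's `^` (dependency) and `+` (variant) syntax, where the exact variant names may differ between Spack versions:

```shell
# Independent installs can yield two incompatible HDF5 builds:
spack install petsc        # pulls in a default-variant HDF5
spack install hdf5 +cxx    # builds a second HDF5 with the C++ API

# Constraining the dependency makes both specs share one HDF5:
spack install hdf5 +cxx
spack install petsc ^hdf5 +cxx
```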
> I haven't yet found a good way to get the paths to the built libraries
> out of Spack. You can explicitly query Spack, or Spack can generate
> definitions for "module" (or you can build "lmod" with Spack...).
> Currently, this requires setting up a few environment variables before
> building Cactus. I'd rather have a system where the settings are written
> into an options file that Cactus (or Simfactory) then uses, but I
> haven't set this up yet.
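One way to sketch that options-file idea, with placeholder prefixes and hypothetical file/variable names (with Spack, the prefixes would come from `spack location -i <pkg>`):

```shell
# Sketch: capture library prefixes once and write them into a fragment
# that Cactus/Simfactory could include, instead of exporting environment
# variables per shell session. Placeholder values below; with Spack:
#   HDF5_ROOT=$(spack location -i hdf5)
#   MPI_ROOT=$(spack location -i openmpi)
HDF5_ROOT=/fake/spack/hdf5
MPI_ROOT=/fake/spack/openmpi

# Emit a Cactus option-list fragment (hypothetical filename):
cat > spack-libs.cfg <<EOF
HDF5_DIR = ${HDF5_ROOT}
MPI_DIR = ${MPI_ROOT}
EOF
```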
> I'm using Spack on several systems (laptop, workstation, Comet,
> Stampede), and it works fine there.
I would like to second Erik's initial suggestion of removing basic libs
such as MPI from ExternalLibraries. I and other users mess around a lot
with the code and frequently run make ...-realclean to get back to a sane
state. Having to recompile MPI every time I do a realclean is just
crazy. The same goes for HDF5. Of course, this can be overridden
manually, but the way to do so has changed a few times, and I've often
run into issues when rebasing the flesh and ExternalLibraries onto
Einstein Toolkit/Cactus master (which I do every few months).
Quite fundamentally, I think we want to distinguish three kinds of
user groups: newbies, somewhat experienced computational scientists,
and seasoned computational scientists. For newbies, a
virtual machine image or docker instance that has everything
preinstalled is the best solution, I think. This won't be HPC, but, I
think, if someone wants to run Einstein Toolkit for production/science,
they should probably be able to specify library paths and/or compile
baseline libraries. For people with intermediate experience, we could
provide simple tutorials on how to compile core libraries if they are
not available on their system. Experts don't need help compiling MPI
and HDF5. What they need are clear instructions on how to point
Cactus/Einstein Toolkit to the relevant include and lib dirs.
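Such instructions boil down to a few lines in the Cactus option list; a sketch using the usual ExternalLibraries variables, where the paths are placeholders for an actual installation:

```
# Option-list fragment: point the HDF5 and MPI thorns at existing installs
HDF5_DIR = /opt/hdf5-1.8.16
MPI_DIR = /opt/openmpi-1.10

# Or spell out the pieces individually:
# HDF5_INC_DIRS = /opt/hdf5-1.8.16/include
# HDF5_LIB_DIRS = /opt/hdf5-1.8.16/lib
# HDF5_LIBS = hdf5_hl hdf5
```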