[Users] Meeting Minutes

Ian Hinder ian.hinder at aei.mpg.de
Tue Mar 25 10:23:25 CDT 2014


On 25 Mar 2014, at 15:58, Erik Schnetter <schnetter at gmail.com> wrote:

> On Mar 25, 2014, at 10:30 , Ian Hinder <ian.hinder at aei.mpg.de> wrote:
> 
>> On 25 Mar 2014, at 14:53, Frank Loeffler <knarf at cct.lsu.edu> wrote:
>> 
>>> On Mon, Mar 24, 2014 at 04:07:35PM -0700, Roland Haas wrote:
>>>> According to the minutes from 2013-12-16 a decision was reached:
>>> 
>>> Thanks for digging this out. This is what I remembered.
>>> 
>>>> What is not in the notes is that Ian also wanted a way to force the
>>>> download, so that for clusters where no network connection (or git) is
>>>> available at build time (or when preparing for a flight), one can force
>>>> the download on one's laptop and then use simfactory to sync the tarballs.
>>> 
>>> I think we mentioned at least in the call that such a possibility should
>>> exist - an option to download all possibly required tarballs.
>> 
>> The obvious place to store the URL for the tarball is in the configuration.ccl file.  What is the recommended procedure for reading entries from this file outside of the build system?  It would have been nice to find that the Perl files in lib/sbin were Perl modules which could be used outside the build system, but I was disappointed to find this was not the case.  The file is parsed by lib/sbin/ConfigurationParser.pl.
> 
> I think we want to move away from building external libraries as part of the Cactus build; instead, we want to build them ahead of time, e.g. via Simfactory. This has several advantages, such as e.g. that a "make clean" doesn't require rebuilding the libraries, and that several Cactus configurations can use the same external libraries, and that one can even have one power user build the libraries on a system while all others simply use them.
> 
> One additional advantage is that building external libraries from within Cactus is actually quite difficult, since the Cactus compiler options are often "strange" and need to be "cleaned" before one can use them. The respective shell code is arcane, and often breaks between different systems. Building external libraries on their own, independent of Cactus, is much simpler.
> 
> I know this because I implemented this in Simfactory3, and I definitely think this is the way to go. The current build recipes use Clang (not gcc, and not the system's "standard" compiler) for building. I did this because I was interested in C++11, and most system compilers (Intel, PGI, older versions of GCC, current Nvidia) don't support this. However, in the interest of backward compatibility we should probably also provide build recipes for (say) an older version of GCC that is available on all systems.

I thought about that this morning, and agree that we need a better system for the external libraries.  I don't know whether this should be tied to simfactory or embedded in Cactus.

However, in the short term, before simfactory 3 is deployed (and works with gcc), I would like to patch the current Cactus mechanism. A similar idea might be useful for initial data thorns which come with large binary datasets; the datasets might be better distributed only to the machines which need them, rather than making a detour via my home internet connection.

I have filtered the Cactus Boost git repository to remove the tarball, and added code to perform the download before building if needed.  The tarball is currently hosted in the downloads section of the bitbucket repository, but this could change.  Consider this a proof of concept.

https://bitbucket.org/ianhinder/boost

This thorn puts the tarball in Cactus/libcache, and reuses it if it finds it there already.  It checks the md5sum of the downloaded file against a value hard-coded in the script.  It deletes any old versions of the tarball that it finds before moving the new version into place.  It has been tested on Mac OS 10.8.3 (my laptop) and Scientific Linux 6.0 (Datura).  The download code is currently in the thorn, but should move into Cactus once it is shared between thorns.  You can initiate the download for this thorn by running the script, so you could do, for example,

for x in arrangements/*/*/download.sh; do "$x"; done

We could provide a make target or script to do this.
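For discussion, here is a minimal sketch of the caching-and-checksum logic described above.  The function names (md5_of, fetch_tarball) are hypothetical, not part of the actual thorn; the example values at the bottom come from the Boost proof of concept.

```shell
#!/bin/sh
# Sketch of a generic tarball-download helper with caching and md5
# verification.  Function names are illustrative only.

# Portable md5: md5sum on Linux, md5 -q on Mac OS
md5_of() {
    if command -v md5sum >/dev/null 2>&1; then
        md5sum "$1" | cut -d' ' -f1
    else
        md5 -q "$1"
    fi
}

# fetch_tarball URL DEST EXPECTED_HASH
# Reuses DEST if its checksum already matches; otherwise downloads to a
# temporary file, verifies it, and moves it into place.
fetch_tarball() {
    url=$1; dest=$2; hash=$3
    if [ -f "$dest" ] && [ "$(md5_of "$dest")" = "$hash" ]; then
        echo "Using cached $dest"
        return 0
    fi
    mkdir -p "$(dirname "$dest")"
    curl -L -o "$dest.tmp" "$url"
    if [ "$(md5_of "$dest.tmp")" != "$hash" ]; then
        echo "Checksum mismatch for $dest" >&2
        rm -f "$dest.tmp"
        return 1
    fi
    mv "$dest.tmp" "$dest"
}

# Example invocation (values from the Boost proof of concept):
# fetch_tarball \
#     https://bitbucket.org/ianhinder/boost/downloads/boost_1_55_0.tar.gz \
#     ../../../libcache/boost_1_55_0.tar.gz \
#     93780777cfbf999a600f62883bd54b17
```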

At the moment, the download script needs several thorn-specific pieces of information which are currently hard-coded:

DISTBASE=boost
DISTVERSION=1_55_0
DISTNAME=${DISTBASE}_${DISTVERSION}.tar.gz
DISTHASH=93780777cfbf999a600f62883bd54b17
DISTSRC=https://bitbucket.org/ianhinder/boost/downloads

I think these should go into the configuration.ccl file of the thorn, and the download script can then be a part of Cactus, reading the required information for each thorn.  However, I don't know the correct way to read this information from the CCL file from a shell script.  I could knock up something which worked, but which probably wouldn't be very robust.  It would be better for Cactus to provide an API for getting this sort of information.
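As a strawman, if the entries were stored in configuration.ccl as simple "KEY = value" lines (an assumed convention — configuration.ccl has no such entries today, and this is exactly the kind of ad-hoc parsing a proper Cactus API would replace), a shell helper could extract them like this:

```shell
#!/bin/sh
# Sketch of reading hypothetical DIST* entries from a CCL-style file,
# assuming "KEY = value" lines.  Both the key names and this convention
# are assumptions for illustration.

# ccl_get FILE KEY: print the value of the first "KEY = value" line,
# with surrounding whitespace stripped.
ccl_get() {
    sed -n "s/^[[:space:]]*$2[[:space:]]*=[[:space:]]*\(.*[^[:space:]]\)[[:space:]]*\$/\1/p" "$1" | head -n1
}

# Example:
# DISTBASE=$(ccl_get configuration.ccl DISTBASE)
# DISTVERSION=$(ccl_get configuration.ccl DISTVERSION)
```

This would of course be fragile against the full CCL grammar (blocks, comments, continuation lines), which is the argument for a real API.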

-- 
Ian Hinder
http://numrel.aei.mpg.de/people/hinder


