[Users] HDF5 Error, Hilbert
Erik Schnetter
schnetter at cct.lsu.edu
Wed Oct 21 09:17:33 CDT 2015
On Wed, Oct 21, 2015 at 3:25 AM, Ian Hinder <ian.hinder at aei.mpg.de> wrote:
>
> On 20 Oct 2015, at 22:13, Geraint Pratten <g.pratten at sussex.ac.uk> wrote:
>
> Thanks Ian, I think that did the trick! I rebuilt Cactus with a different
> MPI infrastructure and made sure that the appropriate MPI environment was
> loaded. Seems to be working now, e.g.
>
> INFO (Carpet): MPI is enabled
> INFO (Carpet): Carpet is running on 12 processes
> INFO (Carpet): This is process 0
> INFO (Carpet): OpenMP is enabled
> INFO (Carpet): This process contains 1 threads, this is thread 0
> INFO (Carpet): There are 12 threads in total
> INFO (Carpet): There are 1 threads per process
> INFO (Carpet): This process runs on host node203, pid=28712
> INFO (Carpet): This process runs on 24 cores: 0-23
> INFO (Carpet): Thread 0 runs on 24 cores: 0-23
>
> Thanks!
> Geraint
>
>
> Glad to hear it's working!
>
> The last line indicates that the threads are not being pinned to specific
> cores (thread 0 is allowed to run on any of the 24 cores). We usually
> recommend pinning them. You should be able to do this by activating the
> thorns hwloc and SystemTopology in your parameter file.
>
> Note to others: with a change in the last few months, some of the
> functionality of hwloc was moved into SystemTopology, which is not
> automatically activated. If you want to get automatic thread pinning, you
> now need to edit your parameter files to explicitly activate SystemTopology
> in addition to hwloc.
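>
> For example, a minimal parameter-file snippet (a sketch, assuming both
> thorns are compiled into your configuration; keep whatever other
> ActiveThorns entries you already have):
>
>   ActiveThorns = "hwloc SystemTopology"
>   # With both thorns active, SystemTopology queries the machine layout
>   # through hwloc and pins each thread automatically; no further
>   # parameters are needed for the default pinning behaviour.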
>
> Erik, could SystemTopology be automatically activated somehow?
>
Ian,
Yes, the way to do this is to have another thorn (that is already
activated) optionally require this thorn. Obvious choices are the flesh, or
Carpet (if you don't care about PUGH users).
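A minimal sketch of the optional-requirement route in Carpet's
configuration.ccl, assuming SystemTopology declares a PROVIDES block for a
capability of the same name (the capability and macro names here are
illustrative, not confirmed against the actual thorn):

  # Carpet/configuration.ccl: optionally require the capability that
  # SystemTopology provides, so that SystemTopology is activated
  # automatically whenever it is compiled into the executable.
  OPTIONAL SystemTopology
  {
    DEFINE HAVE_SYSTEMTOPOLOGY
  }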
-erik
--
Erik Schnetter <schnetter at cct.lsu.edu>
http://www.perimeterinstitute.ca/personal/eschnetter/