<div dir="ltr">On Wed, Oct 21, 2015 at 3:49 AM, Roland Haas <span dir="ltr"><<a href="mailto:rhaas@aei.mpg.de" target="_blank">rhaas@aei.mpg.de</a>></span> wrote:<br><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello all,<br>
<span class=""><br>
> Glad to hear it's working!<br>
><br>
> The last line indicates that the threads are not being pinned to<br>
> specific cores. Usually, we recommend pinning them. You should be<br>
> able to do this by activating thorns hwloc and SystemTopology in your<br>
> parameter file.<br>
</span>I suggest to also use your mpirun command's pinning options if<br>
available. This can be an issue since MPI will allocate memory before<br>
thorns hwloc and SystemTopology have a change of pinning the process so<br>
that you may end up with cores having to use memory that is physically<br>
attached to the "other" sockets in multi-socket systems.<br>
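For example, with Open MPI the pinning can be requested on the mpirun command line itself, so that ranks are already bound when MPI_Init allocates memory. This is only a sketch; the flag names vary between MPI implementations, and the executable name, rank count, and threads-per-rank below are illustrative:<br>

```shell
# Open MPI sketch: bind each rank to cores within one socket so that
# first-touch allocations land on the local NUMA node.
# PE=4 reserves 4 cores per rank (match this to OMP_NUM_THREADS);
# --report-bindings prints the resulting layout for verification.
# "cactus_sim" and "sim.par" are placeholder names.
mpirun -np 8 --map-by socket:PE=4 --bind-to core --report-bindings \
    ./cactus_sim sim.par
```
<br>Other MPI stacks expose the same idea differently (e.g. Intel MPI via the I_MPI_PIN environment variables), so check your mpirun/srun documentation for the equivalent options.<br>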
<br>
There is unfortunately nothing any thorn can do about this since MPI may<br>
allocate memory as soon as MPI_Init is called by the flesh.<br></blockquote><div><br></div><div>We can introduce a mechanism by which a thorn can be called before MPI_Init is called. This would call all thorns that are compiled in, since the parameter file has not been read yet.</div><div><br></div><div>-erik</div></div><div><br></div>-- <br><div class="gmail_signature">Erik Schnetter <<a href="mailto:schnetter@cct.lsu.edu" target="_blank">schnetter@cct.lsu.edu</a>><br><a href="http://www.perimeterinstitute.ca/personal/eschnetter/" target="_blank">http://www.perimeterinstitute.ca/personal/eschnetter/</a></div>
</div></div>