<div dir="ltr">I added a second benchmark, using a Thornburg04 patch system, 8th order finite differencing, and 4th order patch interpolation. The results are<div><br></div><div><p style="margin:0px;font-size:10px;font-family:Menlo">original: 8.53935e-06 sec</p><p style="margin:0px;font-size:10px;font-family:Menlo">rewrite: 8.55188e-06 sec</p><p style="margin:0px;font-size:10px;font-family:Menlo"><br></p><p style="margin:0px;font-size:10px;font-family:Menlo"><span style="font-family:arial,sans-serif;font-size:small">this time with 1 thread per MPI process, since that was most efficient in both cases. Most of the time is spent in inter-patch interpolation, which is much more expensive than in a "regular" case since this benchmark is run on a single node and hence with very small grids.</span><br></p><p style="margin:0px;font-size:10px;font-family:Menlo"><span style="font-family:arial,sans-serif;font-size:small"><br></span></p><p style="margin:0px;font-size:10px;font-family:Menlo"><span style="font-family:arial,sans-serif;font-size:small">With these numbers under our belt, can we merge the rewrite branch?</span></p><p style="margin:0px;font-size:10px;font-family:Menlo"><span style="font-family:arial,sans-serif;font-size:small"><br></span></p><p style="margin:0px;font-size:10px;font-family:Menlo"><span style="font-family:arial,sans-serif;font-size:small">-erik</span></p><p style="margin:0px;font-size:10px;font-family:Menlo"><span style="font-family:arial,sans-serif;font-size:small"><br></span></p></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Jul 4, 2015 at 5:19 PM, Ian Hinder <span dir="ltr"><<a href="mailto:ian.hinder@aei.mpg.de" target="_blank">ian.hinder@aei.mpg.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto"><div>hi Erik,</div><div><br></div><div>You could try the ones at</div><div><br></div><div><a 
href="https://bitbucket.org/ianhinder/cactusbench/src/faea4e13ed4232968e81edd1bbc80519198fe2b2/examples/ML_BSSN_Test/benchmark/?at=master" target="_blank">https://bitbucket.org/ianhinder/cactusbench/src/faea4e13ed4232968e81edd1bbc80519198fe2b2/examples/ML_BSSN_Test/benchmark/?at=master</a></div><div><br></div><div>I haven't updated them in a while, but hopefully the ET is sufficiently backward compatible for them to still work.<span class=""><br><br>-- <div>Ian Hinder</div><div><a href="http://members.aei.mpg.de/ianhin" target="_blank">http://members.aei.mpg.de/ianhin</a></div></span></div><div><div class="h5"><div><br>On 4 Jul 2015, at 17:04, Erik Schnetter <<a href="mailto:schnetter@cct.lsu.edu" target="_blank">schnetter@cct.lsu.edu</a>> wrote:<br><br></div><blockquote type="cite"><div><div dir="ltr">On Sat, Jul 4, 2015 at 10:21 AM, Ian Hinder <span dir="ltr"><<a href="mailto:ian.hinder@aei.mpg.de" target="_blank">ian.hinder@aei.mpg.de</a>></span> wrote:<br><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><br><div><span><div>On 3 Jul 2015, at 22:38, Erik Schnetter <<a href="mailto:schnetter@cct.lsu.edu" target="_blank">schnetter@cct.lsu.edu</a>> wrote:</div><br><blockquote type="cite"><div dir="ltr">I ran the Simfactory benchmark for ML_BSSN on both the current version and the "rewrite" branch to see whether this branch is ready for production use. I ran this benchmark on a single node of Shelob at LSU. In both cases, using 2 OpenMP threads and 8 MPI processes per node was fastest, so I am reporting these results below. 
Since I was interested in the performance of McLachlan, this is a unigrid vacuum benchmark using fourth order differencing.<div><br></div><div>One noteworthy difference is that dissipation as implemented in the "rewrite" branch is finally approximately as fast as thorn Dissipation, and I have thus used this option for the "rewrite" branch.<br clear="all"><div><br></div><div>Here are the high-level results:</div><div><br></div><div>current: 3.03136e-06 sec per grid point</div><div>rewrite: 2.85734e-06 sec per grid point</div><div><br></div><div>That is, the rewrite branch is about 5% faster.</div></div></div></blockquote><div><br></div></span>Hi Erik,</div><div><br></div><div>That is very reassuring! However, for production use, I would be more interested in 6th or 8th order finite differencing (where the advection stencils become very large), and with Jacobians. If 8th order with Jacobians is at least a similar speed with the rewrite branch, then I would be happy with switching.</div></div></blockquote><div><br></div><div>Ian</div><div><br></div><div>Do you want to suggest a particular benchmark parameter file?</div><div><br></div><div>-erik</div></div><div><br></div>-- <br><div>Erik Schnetter <<a href="mailto:schnetter@cct.lsu.edu" target="_blank">schnetter@cct.lsu.edu</a>><br><a href="http://www.perimeterinstitute.ca/personal/eschnetter/" target="_blank">http://www.perimeterinstitute.ca/personal/eschnetter/</a></div>
</div></div>
</div></blockquote></div></div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">Erik Schnetter <<a href="mailto:schnetter@cct.lsu.edu" target="_blank">schnetter@cct.lsu.edu</a>><br><a href="http://www.perimeterinstitute.ca/personal/eschnetter/" target="_blank">http://www.perimeterinstitute.ca/personal/eschnetter/</a></div>
</div>
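For reference, the relative change implied by the timings quoted in this thread can be recomputed directly. This is a minimal sketch: the Thornburg04 numbers are assumed to be per grid point like the unigrid ones, and the benchmark labels are informal descriptions, not Cactus thorn or parameter-file names.

```python
# Timings quoted in the thread, in seconds per grid point.
# (The Thornburg04 run reports only "sec"; per-grid-point units are assumed.)
benchmarks = {
    "unigrid, 4th order": (3.03136e-06, 2.85734e-06),     # (current, rewrite)
    "Thornburg04, 8th order": (8.53935e-06, 8.55188e-06),  # (original, rewrite)
}

for name, (baseline, rewrite) in benchmarks.items():
    # Positive change means the rewrite branch is faster than the baseline.
    change = (baseline - rewrite) / baseline
    verdict = "faster" if change > 0 else "slower"
    print(f"{name}: rewrite is {abs(change):.2%} {verdict}")
```

This reproduces the thread's conclusions: the unigrid 4th-order run shows the rewrite branch roughly 5-6% faster, while the Thornburg04 8th-order run differs by about 0.15%, i.e. the two branches are effectively tied there.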