<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>
<p>Present: Steve B, Roland H, Yosef Z, Zhichao Z, Maria H, Liu H, Christopher E, Alois Peter S, Beyhan,
Zach E, Helvi, Kadri Y
</p>
<p>Chair: Steve
Minutes: Bill (Helvi next week, April 9th)
</p>
<p> SB-Ian Hinder is working on a replacement for cactuscode.org at test.cactuscode.org. Please take a
look and make comments.</p>
<p> SB-What do we use SVN for? External libs, but those are on GitHub / Bitbucket. RH-The GW analysis
code is still on those SVN servers. We could make a Git copy of it. SB-It would be nice to get rid of another server.
</p>
<p>* update on potential codes to be included in the next release, from the minutes [RH]</p>
<p>** BaikalETK (Zach, Helvi)</p>
<p> ZE-There is an Intel 17 performance regression; Zach looked into it and could not fix it, but there is
no problem with Intel 19. Baikal with Intel 19 is faster than McLachlan with GCC. The slowest of the four
combinations is McLachlan with GCC, which is 20% slower running on one HPC node with 4 MPI processes.
Intel 18 was not tested (no access). For performance, dependency checking has to be disabled for
vectorization; with Intel 17 this causes the compiler to segfault, though it does give a warning message.
With Intel 19 these problems are resolved. Zach put in an #ifdef so the code compiles with Intel 17, but
performance there is half what it should be.</p>
<p> Baikal is in a much better state than a week ago.
</p>
<p> NRPy can now reliably generate C-code kernels in parallel in about 6 minutes on an older i7 machine. Zach
fixed a problem where the parallel codegen sometimes failed, triggering a fallback serial codegen mode that
could take about an hour.</p>
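<p> As a rough illustration of the parallel-with-serial-fallback pattern described above (a minimal sketch,
not NRPy's actual internals; the kernel names and the generate_kernel function are made up):</p>
<pre>
# Sketch only: generate C-code kernels in a process pool, falling back to
# (much slower) serial generation if the parallel path fails.
# All names here are hypothetical, not NRPy's real API.
import multiprocessing as mp

KERNELS = ["BSSN_RHS", "Ricci", "Constraints"]  # hypothetical kernel names

def generate_kernel(name):
    """Stand-in for a codegen routine that writes one C kernel to disk."""
    return name + ".c"

def generate_all(kernels):
    try:
        with mp.Pool() as pool:
            return pool.map(generate_kernel, kernels)   # fast, parallel path
    except Exception as err:
        print("Parallel codegen failed (%s); falling back to serial." % err)
        return [generate_kernel(k) for k in kernels]    # slow, serial fallback

if __name__ == "__main__":
    print(generate_all(KERNELS))
</pre>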
<p> It would be good to know which clusters are using the older Intel compiler.
</p>
<p> Helvi is the official reviewer. Zach says BaikalETK is ready for her review.
</p>
<p> SB-ETK proposal was awarded for another 4 years!
</p>
<p>** PreSync (Steve, Roland)</p>
<p> SB-Not a lot to say about PreSync; adding the missing dependencies from GiRaFFE. RH-I am reviewing it
and finding where things break; there are a lot of places where things need fixing.
</p>
<p>* timeline of upcoming release[RH] <span><a class="moz-txt-link-freetext" href="https://docs.einsteintoolkit.org/et-docs/Release_Details#Schedule_for_ET_2020_05">https://docs.einsteintoolkit.org/et-docs/Release_Details#Schedule_for_ET_2020_05</a> </span></p>
<p> Release date is listed as May 31st, a Sunday. Previous dates have shuffled around. Do not put off work
for the release; we are all busy with many things.</p>
<p>* Release of testsuite results <a class="moz-txt-link-freetext" href="http://einsteintoolkit.org/testsuite_results/index.php">http://einsteintoolkit.org/testsuite_results/index.php</a></p>
<p> SB has not moved the Docker images around yet. RH-Found that code is likely to hang on Comet, though
not always in the same way. The Llama patch 7 test gets into an infinite loop inside malloc, as provided by
the MPI library; the error is not always at the same place in the code. Using OpenMPI on Comet instead seems
to work. The typical MPI stacks are Intel MPI, OpenMPI, or MPICH; Comet has the less common MVAPICH2.</p>
<p>**bugs of severity major or worse <a class="moz-txt-link-freetext" href="https://bitbucket.org/einsteintoolkit/tickets/issues?kind=bug&priority=major&priority=blocker&priority=critical&status=new&status=open">https://bitbucket.org/einsteintoolkit/tickets/issues?kind=bug&priority=major&priority=blocker&priority=critical&status=new&status=open</a></p>
<p>**trivial or minor bugs tagged for the next release <a class="moz-txt-link-freetext" href="https://bitbucket.org/einsteintoolkit/tickets/issues?kind=bug&priority=minor&priority=trivial&milestone=ET_2020_05&status=new&status=open">https://bitbucket.org/einsteintoolkit/tickets/issues?kind=bug&priority=minor&priority=trivial&milestone=ET_2020_05&status=new&status=open</a></p>
<p>**other open tickets flagged for next release <a class="moz-txt-link-freetext" href="https://bitbucket.org/einsteintoolkit/tickets/issues?kind=proposal&kind=enhancement&milestone=ET_2020_05&status=open&status=new">https://bitbucket.org/einsteintoolkit/tickets/issues?kind=proposal&kind=enhancement&milestone=ET_2020_05&status=open&status=new</a>
</p>
<p>RH-New tickets are mostly related to PreSync. </p>
<p>* gallery testing: tasks <a class="moz-txt-link-freetext" href="https://bitbucket.org/einsteintoolkit/tickets/issues?kind=task&milestone=ET_2020_05">https://bitbucket.org/einsteintoolkit/tickets/issues?kind=task&milestone=ET_2020_05</a></p>
<p>** volunteers needed, please volunteer (someone) [RH]</p>
<p>**TOV: Brock Brendal (UIUC)</p>
<p>**BNS: Shawn Rosofsky (UIUC)</p>
<p>**BBH: A. Peter S. Runs fine on one system, but there is some units problem.</p>
<p>**Poisson: Gabella (Vanderbilt)</p>
<p>**Multipatch scalar wave: TBD. Let us find someone!</p>
<p>SB-Has a Python script, RunOne, that runs one of the tests at a time. It is useful: it shows the MPI
command, so you can re-run just that test. He can make it available.</p>
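<p> A hypothetical sketch of what such a run-one-test helper could look like (SB's actual script has not
been shared yet; the executable path, launcher, and defaults here are all assumptions):</p>
<pre>
# Sketch: run a single Cactus test by hand and print the MPI command, so the
# same test can be re-run directly. All paths and names are illustrative.
import subprocess
import sys

def run_one(parfile, exe="exe/cactus_sim", nprocs=2):
    cmd = ["mpirun", "-np", str(nprocs), exe, parfile]
    print("MPI command:", " ".join(cmd))  # copy/paste this to re-run the test
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    sys.exit(run_one(sys.argv[1]))
</pre>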
<p>SB-Replacement machine for the tutorial server: there is a quote from Dell for a small workstation, maybe
16 cores and 32 GB RAM, and the JupyterHub could run on it. He wants it to be its own machine because of the
security risk, walled off in its own box. We have some overhead funds at LSU that we might be able to use.
The current server is still running, but it is not ideal.
</p>
</p>
<p>* Unanswered questions on the mailing list</p>
<p>Regarding the scalar-fields email: Zach had a Skype call with the poster. Helvi might show her code that
couples a scalar field to other fields.
</p>
<p>* Open Tickets</p>
<p>* Tickets ready for review
</p>
<p>SB-The ETK workshop is planned for 3-7 August 2020 at LSU. The announcement has been kept up, but it is
unclear whether it will happen or not. Looking for lecturers and tutors.
</p>
<br>
<pre class="moz-signature" cols="72">--
=====================================
William Gabella
Research Assistant Professor
Department of Physics and Astronomy
Vanderbilt University
Nashville, TN USA
<a class="moz-txt-link-abbreviated" href="mailto:b.gabella@vanderbilt.edu">b.gabella@vanderbilt.edu</a>
(o) 615-343-2713</pre>
</body>
</html>