<div dir="ltr">Hi Roland,<div><br></div><div>Thanks for the clarification.</div><div><br></div><div>Cheers,</div><div>Benja</div></div><br><div class="gmail_quote"><div dir="ltr">On Wed, Oct 24, 2018 at 5:48 PM Roland Haas <<a href="mailto:rhaas@illinois.edu">rhaas@illinois.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello Benja,<br>
<br>
you will still see both black holes. "Half this size" really means<br>
"reduce the radius of the sphere by half", not "show only one half of<br>
the domain".<br>
<br>
E.g. the original rpar file may have had a domain that goes out to a<br>
radius of 2000M while the new one goes only to 1000M. The black holes<br>
are never more than about 20M apart, though, so they are always<br>
included.<br>
<br>
Yours,<br>
Roland<br>
<br>
> Hi Roland,<br>
> <br>
> Many thanks for your help; we are going to try the suggested rpar.<br>
> Just one question: if we use "domain half its current size" as you<br>
> suggested, when we plot the phi.*.xy.h5 result files, will we see the two<br>
> "black holes" as in figure "<br>
> <a href="https://docs.einsteintoolkit.org/et-docs/File:vt-5.png" rel="noreferrer" target="_blank">https://docs.einsteintoolkit.org/et-docs/File:vt-5.png</a>"<br>
> or just one?<br>
> <br>
> Cheers,<br>
> Benja<br>
> <br>
> <br>
> On Tue, Oct 23, 2018 at 7:47 PM Roland Haas <<a href="mailto:rhaas@illinois.edu" target="_blank">rhaas@illinois.edu</a>> wrote:<br>
> <br>
> > Hello Benja,<br>
> ><br>
> > attached please find a modified rpar file where I made two changes:<br>
> ><br>
> > * changed the boundary condition to be of Robin type instead of<br>
> > Dirichlet type, which reduces reflections off the boundary (the line<br>
> > NewRad::z_is_radial = "yes")<br>
> > * made the domain half its current size, which reduces the memory<br>
> > footprint and runtime but will induce some reflections off the boundary<br>
> > * ran with very low resolution (N=24 instead of N=28), which makes the<br>
> > simulation run faster<br>
> ><br>
> > I gave it a test run on my workstation (12 cores, 96GB of RAM) and it<br>
> > runs at ~4.1 M/hour. Since the full simulation is about 1000 M, this<br>
> > will finish in about 10 days.<br>
> ><br>
> > If this is too slow (which it may well be) then you can try to reduce<br>
> > the finite difference order from 8 to 6 by changing the lines (they<br>
> > are not consecutive in the file):<br>
> ><br>
> > Driver::ghost_size = 5<br>
> > Coordinates::patch_boundary_size = 5<br>
> > Coordinates::additional_overlap_size = 3<br>
> > Coordinates::outer_boundary_size = 5<br>
> > ML_BSSN::fdOrder = 8<br>
> > SummationByParts::order = 8<br>
> > Interpolate::interpolator_order = 5<br>
> > WeylScal4::fdOrder = 8<br>
> > to:<br>
> ><br>
> > Driver::ghost_size = 4<br>
> > Coordinates::patch_boundary_size = 4<br>
> > Coordinates::additional_overlap_size = 3<br>
> > Coordinates::outer_boundary_size = 4<br>
> > ML_BSSN::fdOrder = 6<br>
> > SummationByParts::order = 6<br>
> > Interpolate::interpolator_order = 3<br>
> > WeylScal4::fdOrder = 6<br>
> ><br>
> > which gives me a run speed of ~6.9M/hr (so 7 days runtime).<br>
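> ><br>
> > (The day figures above are just total simulated time divided by run<br>
> > speed; a quick sketch of the arithmetic, using only the numbers quoted<br>
> > in this thread, with rounding differences of up to a day:)<br>
> ><br>

```python
# Rough runtime check for the speeds quoted in this thread:
# ~1000 M of simulated time, at ~4.1 M/hour (8th order) or
# ~6.9 M/hour (6th order).
total_M = 1000.0
for order, speed in [(8, 4.1), (6, 6.9)]:
    hours = total_M / speed
    print(f"fdOrder {order}: {hours:.0f} h (~{hours / 24:.1f} days)")
```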
> ><br>
> > This is the command line to start the simulation:<br>
> ><br>
> > simfactory/bin/sim create-submit GW150914_24 --define N 24 \<br>
> > --parfile ~/runs/devel/GW150914.rpar --procs 12 --walltime 24:00:00<br>
> ><br>
> > Yours,<br>
> > Roland<br>
> > <br>
> > > Dear friends,<br>
> > ><br>
> > > We are trying to use the EinsteinToolKit GW150914.rpar binary<br>
> > > black hole merger simulation as a use case to test that our container<br>
> > > orchestration product OpenShift can be used for HPC.<br>
> > > Our test environment only has 30 CPUs so we need to execute that<br>
> > > simulation in a reasonable time.<br>
> > ><br>
> > > Please can you tell us how to modify GW150914.rpar in order to get a<br>
> > > less precise simulation executed on a 30-CPU cluster in a reasonable<br>
> > > time (~ a few days).<br>
> > > Right now we can run the GW150914.rpar simulation using OpenMPI +<br>
> > > EinsteinToolKit, but it takes too long to execute (~ weeks).<br>
> > ><br>
> > > We believe that the EinsteinToolKit GW150914.rpar simulation is a<br>
> > > great use case for testing OpenShift for HPC, and of course we will<br>
> > > reference EinsteinToolKit in our final report as a use case for<br>
> > > OpenShift in HPC mode.<br>
> > ><br>
> > > Many thanks in advance for your help,<br>
> > > Benja<br>
> > > <br>
> ><br>
> ><br>
> ><br>
> > --<br>
> > My email is as private as my paper mail. I therefore support encrypting<br>
> > and signing email messages. Get my PGP key from <a href="http://pgp.mit.edu" rel="noreferrer" target="_blank">http://pgp.mit.edu</a> .<br>
> > <br>
> <br>
> <br>
<br>
<br>
<br>
-- <br>
My email is as private as my paper mail. I therefore support encrypting<br>
and signing email messages. Get my PGP key from <a href="http://pgp.mit.edu" rel="noreferrer" target="_blank">http://pgp.mit.edu</a> .<br>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Benjamín Chardí Marco<br>Senior Red Hat Consultant<br>RHCE #<span>100</span>-<span>107</span>-<span>341</span><br><a href="mailto:bchardim@redhat.com" style="color:rgb(17,85,204)" target="_blank">bchardim@redhat.com</a><br>Mobile: 0034 654 344 878<br></div></div>