[Commits] [svn:einsteintoolkit] Paper_EinsteinToolkit_2010/ (Rev. 179)

jfaber at einsteintoolkit.org jfaber at einsteintoolkit.org
Tue Nov 8 22:29:46 CST 2011


User: jfaber
Date: 2011/11/08 10:29 PM

Modified:
 /
  ET.tex

Log:
 A few more edits to the Examples section.
 I'm pretty much done for now, the paper looks very good to me.

File Changes:

Directory: /
============

File [modified]: ET.tex
Delta lines: +31 -32
===================================================================
--- ET.tex	2011-11-08 17:31:14 UTC (rev 178)
+++ ET.tex	2011-11-09 04:29:46 UTC (rev 179)
@@ -2645,10 +2645,10 @@
 The examples in the previous subsections did not include the evolution of
 matter within a relativistic spacetime. One interesting test of a coupled
 matter-spacetime evolution is to measure the eigenfrequencies of a stable TOV
-star~\todo{cite papers also doing this}. These eigenfrequencies can be compared
+star~(see, e.g.,~\cite{Gourgoulhon:1991aa,Romero:1996aa,Shibata:1998sg,Font:2001ew,Shibata:2003iy}). \todo{Do we need more refs?  Five is decent for an e.g.}  These eigenfrequencies can be compared
 to values known from linear perturbation theory.
 
-This test uses an equilibrium configuration of a self-gravitating fluid
+We begin our simulations with an equilibrium configuration of a self-gravitating fluid
 sphere, described by a polytropic equation of state. This one-dimensional
 solution is obtained by the code described in section~\ref{sec:TOVSolver}, and
 is interpolated on the three-dimensional, computational evolution grid.
@@ -2656,14 +2656,14 @@
 \codename{McLachlan} and the hydrodynamics evolution system implemented in
 \codename{GRHydro}.
 
-Specifically for this test, we set up a stable TOV star described by a
+For the test presented here, we set up a stable TOV star described by a
 polytropic equation of state $p=K\rho^\Gamma$ with $K=100$ and $\Gamma=2$,
 and an initial central density of $\rho_c=1.28\times10^{-3}$. This model can
 be taken to represent a non-rotating NS with a mass of
 $M=1.4\mathrm{M}_\odot$. The computational domain is a cube of length
 $640\mathrm{M}$ with a base resolution of $2\mathrm{M}$ ($4\mathrm{M}$,
 $8\mathrm{M}$) in each dimension. Four additional grids refine the region
-around the star located at the origin, each doubling the resolution, with sizes
+around the star centered at the origin, each doubling the resolution, with sizes
 of $120\mathrm{M}$, $60\mathrm{M}$, $30\mathrm{M}$ and $15\mathrm{M}$,
 resulting in a resolution of $0.125\mathrm{M}$ ($0.25\mathrm{M}$,
 $0.5\mathrm{M}$) across the entire star.
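
The TOV model above is completely specified by the polytropic constant, the adiabatic index, and the central density, so the quoted mass of 1.4 M_sun can be cross-checked by integrating the TOV equations directly. The sketch below (Python, geometrized units G = c = M_sun = 1) uses illustrative names that are not those of the TOVSolver thorn described in the paper:

# Sketch: integrate the TOV equations for the polytrope quoted above
# (K = 100, Gamma = 2, rho_c = 1.28e-3) in units with G = c = M_sun = 1.
import numpy as np
from scipy.integrate import solve_ivp

K, Gamma, rho_c = 100.0, 2.0, 1.28e-3

def energy_density(p):
    """Total energy density e = rho (1 + eps) for the polytrope."""
    rho = (p / K) ** (1.0 / Gamma)
    return rho + p / (Gamma - 1.0)

def tov_rhs(r, y):
    """Right-hand sides dP/dr and dm/dr of the TOV equations."""
    p, m = y
    e = energy_density(max(p, 0.0))
    dpdr = -(e + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * e
    return [dpdr, dmdr]

def surface(r, y):
    """Stop once the pressure has dropped to a tiny fraction of p_c."""
    return y[0] - 1e-10 * K * rho_c**Gamma
surface.terminal = True

p_c = K * rho_c**Gamma
sol = solve_ivp(tov_rhs, [1e-6, 100.0], [p_c, 0.0],
                events=surface, rtol=1e-10, atol=1e-14, max_step=0.01)
R, M = sol.t[-1], sol.y[1, -1]
print(f"R = {R:.2f}, M = {M:.3f}")   # gravitational mass should come out close to 1.4
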
@@ -2691,13 +2691,13 @@
 hydrodynamics simulation, compared to the corresponding frequencies as obtained
 with perturbative techniques, kindly provided to us by Kentaro Takami and
 computed using the method described in~\cite{Yoshida:1999vj}. The PSD was computed
-using the entire time series of the high-resolution run, removing the linear
-trend, averaging over Hanning windows overlapping half the signal length after
-padding the signal to five time it's length. The agreement of the frequencies
-of the fundamental mode and the first three overtones is clearly visible, but
-we are not able to show the same for higher overtones, at this resolution.
-We expect to be able to resolve higher overtones when using even higher
-resolution, but did not pursue further because of the involved computational
+using the entire time series of the high-resolution run, by removing the linear
+trend and averaging over Hanning windows overlapping half the signal length after
+padding the signal to five times its length. The agreement of the
+fundamental mode and first three overtone frequencies is clearly visible, but
+the finite numerical resolution prevents a similar comparison beyond this.
+Higher overtones should be measurable with higher
+resolution, but at substantial computational
 cost.
 
 \begin{figure}
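
For reference, the PSD procedure described above maps naturally onto scipy's Welch estimator. The following is a sketch, not the script actually used for the figure; central_density_psd, rho_c and dt are placeholder names for the sampled central density and its time step:

# Sketch of the PSD estimate: remove the linear trend, pad the signal
# to five times its length, then average Hanning windows of the original
# signal length that overlap by half that length.
import numpy as np
from scipy.signal import welch, detrend

def central_density_psd(rho_c, dt):
    n = len(rho_c)
    data = detrend(rho_c, type="linear")             # remove the linear trend
    data = np.concatenate([data, np.zeros(4 * n)])   # pad to five times the length
    freq, psd = welch(data,
                      fs=1.0 / dt,      # sampling frequency of the time series
                      window="hann",    # Hanning window
                      nperseg=n,        # window length = original signal length
                      noverlap=n // 2,  # windows overlap by half the signal length
                      detrend=False)    # trend already removed above
    return freq, psd

The stellar eigenfrequencies then appear as peaks of psd, to be compared with the perturbative values.
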
@@ -2718,7 +2718,7 @@
 code lies between $1$ and $2$. It cannot be higher than $2$ due to the
 directional flux-split algorithm which is of second order. Depending on
 the solution itself, the hydrodynamics code is only of first order in particular
-regions, e.g., at extrema (like the center of the star), or at the star
+regions, e.g., at extrema (like the center of the star), or at the stellar
 surface.
 
 Figure~\ref{fig:tov_ham_conv} shows the order of convergence of the Hamiltonian
@@ -2736,7 +2736,7 @@
    $r=5\mathrm{M}$. The observed convergence order of about $1.5$ at the center of
   the star is lower than the general second order of the hydrodynamics
    evolution scheme. This is expected because the scheme's convergence rate drops to first
-   order at extrema or shocks, like the stellar center or the star surface.
+   order at extrema or shocks, like the stellar center or surface.
    Consequently, the observed convergence order about half-way between the
    stellar center and surface is higher than $1.5$, but most of the time below
    $2$.}
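
The convergence orders quoted above follow from the standard two- and three-level estimates for runs whose grid spacings differ by a factor of two. A minimal sketch, with illustrative names rather than the actual analysis scripts, is:

# Sketch of the standard convergence-order estimates for resolutions
# differing by a factor of 2 (here 0.5M, 0.25M and 0.125M).  The input
# arrays are placeholders for data sampled at the same points on all grids.
import numpy as np

def convergence_order(q_coarse, q_medium, q_fine, refinement=2.0):
    """Three-level estimate: p = log_r( |q_c - q_m| / |q_m - q_f| )."""
    num = np.abs(q_coarse - q_medium)
    den = np.abs(q_medium - q_fine)
    return np.log(num / den) / np.log(refinement)

def constraint_order(h_coarse, h_fine, refinement=2.0):
    """Two-level estimate for a quantity whose exact value is zero,
    such as the Hamiltonian constraint: p = log_r( |H_c| / |H_f| )."""
    return np.log(np.abs(h_coarse) / np.abs(h_fine)) / np.log(refinement)
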
@@ -2744,13 +2744,13 @@
 
 \subsection{Collapse\pages{2 Christian, Roland}}
 \label{sec:collapse_example}
-The previous examples dealt either with a preexisting black hole (BBH) or
-with a smooth singularity free spacetime (TOV oscillations).  The evolution
+The previous examples dealt either with preexisting BHs, single or in a binary, or
+with a smooth, singularity-free spacetime, as in the case of the TOV star.  The evolution
 codes in the toolkit however are
 also able to handle the dynamic formation of a singularity as a star collapses
-into a black hole.  As a simple example of this process we study the collapse
-of a non-rotating TOV star into a black hole.  We create initial data as in
-section~\ref{sec:tov_oscilations} using $\rho_c=3.154e-3$ and $K_{ID} = 100$,
+into a BH, and as a simple example of this process we study the collapse
+of a non-rotating TOV star.  We create initial data as in
+section~\ref{sec:tov_oscillations} using $\rho_c=3.154\times10^{-3}$ and $K_{ID} = 100$,
 $\Gamma = 2$ yielding a star of mass $1.67\,M_\odot$.  As is
 common in these situations~\cite{Baiotti:2005vi} we trigger collapse by
 reducing the pressure support after initial data has been constructed
@@ -2779,7 +2779,7 @@
 The outermost box has a radius of $R_0 = 204.8\,M_\odot$ and a resolution of
 $3.2\,M_\odot$ ($2.4\,M_\odot$, $1.6\,M_\odot$, $0.6\,M_\odot$ for higher
 convergence levels).
-Around the star which is centered on the origin we stack $5$ extra boxes of
+Around the star, centered on the origin, we stack $5$ extra boxes of
 approximate size $8\times2^\ell\,M_\odot$ for $0 \le \ell \le 4$, where the
 resolution on each finer
 % RH: there is a likely typo in the paramter file which creates boxes of
@@ -2819,7 +2819,7 @@
 approximate location of the star surface  and the circumferential radius of the
 apparent
 horizon once a horizon is found in the simulation.  Clearly the apparent
-horizon is found at approximately the same time as the star's size approaches
+horizon is first found at approximately the time when the star's size approaches
 its Schwarzschild radius.  In
 Figure~\ref{fig:tov_collapse_H_convergence_at0} we display the convergence factor
 for the Hamiltonian constraint at the center of the collapsing star.
@@ -2880,26 +2880,26 @@
 
 \section{Conclusion and Future Work}
 In this article, we described the Einstein Toolkit, a collection
-of freely available and easy to use computational codes for numerical
+of freely available and easy-to-use computational codes for numerical
 relativity and relativistic astrophysics. The code details and example
 results present in this article represent the state of the Einstein
 Toolkit in its release ET\_2011\_05 ``Curie,'' released on April 21,
 2011. 
 
-What was presented here is but a snapshot of the Einstein Toolkit's
-ongoing development whose ultimate goal it is to provide an
+The work presented here is but a snapshot of the Einstein Toolkit's
+ongoing development, whose ultimate goal is to provide an
 open-source set of robust baseline codes to realistically and
 reproducibly model the whole spectrum of relativistic astrophysical
 phenomena including, but not limited to, isolated black holes and
 neutron stars, binary black hole coalescence in vacuum and gaseous
-environs, double neutron star and neutron star -- black hole mergers,
+environments, double neutron star and neutron star -- black hole mergers,
 core-collapse supernovae, and gamma-ray bursts.
 
 For this, much future work towards including proper treatments of
 magnetic fields, more complex equations of state, nuclear reactions,
 neutrinos, and photons will be necessary and will need to be matched
 by improvements in infrastructure (e.g., more flexible AMR on general
-grids) and computing hardware for the required fully coupled 3D,
+grids) and computing hardware for the required fully coupled 3-D,
 multi-scale, multi-physics simulations to become reality.
 
 %%% One of the desirable additions of physics is a proper treatment of
@@ -2922,18 +2922,17 @@
 whole community by integrating their CCE routines into the Einstein Toolkit
 release 2011\_11 ``Maxwell,'' which will be described elsewhere.
 
-A second example for a much needed improvement of an existing method
-is to transition to cell-centered AMR for GR hydrodynamic simulations,
+A second much-needed improvement of our existing methods
+is a transition to cell-centered AMR for GR hydrodynamic simulations,
 which would allow for exact flux conservation across AMR interfaces
 via a refluxing step that adjusts coarse and/or fine grid fluxes for
 consistency (e.g., \cite{Berger:1984zza}). This is also a prerequisite
 for the constrained transport method \cite{Toth:00} for ensuring the
 divergence-free condition for the magnetic field in a future
-implementation of GRMHD within the Einstein Toolkit. Work towards
+implementation of GRMHD within the Einstein Toolkit.  Work towards
 cell-centered AMR, refluxing, and GRMHD is underway and will be
 reported in a future publication.
 
-
 %%% CDO: We have not even talked about the current MHD implementation,
 %%% how can we talk about its improvement?
 %%%
@@ -2949,7 +2948,7 @@
 
 Yet another important goal is to increase the scalability of the {\tt
   Carpet} AMR infrastructure. As we have shown, good scaling is
-limited to only a few thousand processes for some of the most used
+limited to only a few thousand processes for some of the most widely used
 simulation scenarios.  Work is in progress to eliminate this
 bottle-neck.  On the other hand, a production simulation is typically
 composed of a large number of components, and even analysis and I/O
@@ -2968,8 +2967,8 @@
 National Science Foundation under the grant numbers
 0903973/0903782/0904015 (CIGR\@).  Related grants contribute directly
 and indirectly to the success of CIGR, including NSF OCI-0721915, NSF
-OCI-0725070, NSF OCI-0905046, and NSF OCI 0941653, NSF
-AST-0855535, and NASA 08-ATFP08-0093. Results presented in this article were obtained through
+OCI-0725070, NSF OCI-0832606, NSF OCI-0905046, NSF OCI-0941653, NSF
+AST-0855535, NSF DMS-0820923, and NASA 08-ATFP08-0093. Results presented in this article were obtained through
 computations on the Louisiana Optical Network Initiative under
 allocation loni\_cactus05 and loni\_numrel06, as well as on the NSF
 Teragrid under allocations TG-MCA02N014 and TG-PHY060027N, DOE repository m152, HLRB at


