[Commits] [svn:einsteintoolkit] Paper_EinsteinToolkit_2010/ (Rev. 161)
knarf at cct.lsu.edu
Sun Nov 6 00:30:02 CDT 2011
Modified:
/
ET.tex
Log:
a lot of small details
File Changes:
Directory: /
============
File [modified]: ET.tex
Delta lines: +18 -18
===================================================================
--- ET.tex 2011-11-06 05:20:20 UTC (rev 160)
+++ ET.tex 2011-11-06 05:30:02 UTC (rev 161)
@@ -182,7 +182,7 @@
either the GHG formalism (Caltech/Cornell, LSU/BYU/LIU, and Princeton) or
BSSN ``moving puncture'' gauge (or variants thereof; AEI/Sissa, Illinois,
Jena, and Tokyo/Kyoto). Nearly all include some form of adaptive mesh
-refinement, since unigrid models cannot produce accurate longterm evolutions
+refinement, since unigrid models cannot produce accurate long-term evolutions
without requiring exorbitant computational resources. Many groups' codes
now include GRMHD (used widely for NS-NS mergers, and for BH-NS mergers
in \cite{Chawla:2010sw}), and some include microphysical effects as well.
@@ -311,7 +311,7 @@
included in the Einstein Toolkit,
are publicly available. An important task going forward is
to facilitate the coupling of existing and future GRMHD codes
- with AMR to avoid underresolving the
+ with AMR to avoid under-resolving the
dynamics in the systems under investigation. AMR methods are
often much more complicated than uniformly distributed
mesh methods, and require sophisticated algorithms to make use of
@@ -409,7 +409,7 @@
Framework~\cite{Cactuscode:web,Goodale:2002a,CactusUsersGuide:web} is
an open source, modular, portable programming environment for
collaborative high-performance computing, primarily developed at Louisiana State University\@.
-The generic, parallel {\tt Cactus} computationa toolkit consists of modules providing
+The generic, parallel {\tt Cactus} computational toolkit consists of modules providing
parallel drivers, coordinates, boundary conditions, interpolators,
reduction operators, and efficient I/O in different data
formats. Generic interfaces are used, enabling the usage of
@@ -529,7 +529,7 @@
Today's supercomputers differ significantly in
their hardware configuration, available software, directory structure,
-queueing system, queuing policy, and many other user-visible
+queuing system, queuing policy, and many other user-visible
properties. In addition, the system architectures and user interfaces
offered by supercomputers are very different to those offered by
laptops or workstations. This makes performing large,
@@ -622,7 +622,7 @@
multipatch domains with the Llama~\cite{Pollney:2009yz} code; (iii) automatic generation of
vectorized code, where the equations are evaluated simultaneously by
the processor for two grid points at the same time, and (iv) common
-subexpression elimination, and various other optimization strategies.
+sub-expression elimination, and various other optimization strategies.
Within the Einstein Toolkit, the Einstein evolution thorn McLachlan,
as well as the wave extraction thorn WeylScal4, are both generated
@@ -870,7 +870,7 @@
module~\cite{Ansorg:2004ds}, commonly
used in numerical relativity to generate BH-BH binary initial data, which makes
use of the GNU Scientific Library [GSL;~\cite{GSL:web,Galassi:2009}].
-Several modules have also been implemented to read in datafiles generated by
+Several modules have also been implemented to read in data files generated by
the {\tt Lorene} code~\cite{Lorene:web,Gourgoulhon:2000nn}.
Initial data setup is in most cases clearly separated from the
@@ -1008,12 +1008,12 @@
rapid falloff at large radii to yield convergent solutions. Around each object,
a set of nested spheroidal sub-domains (see Fig.~\ref{fig:Lorene_coordinates}) is constructed extending to cover all
of space, with the outermost domain incorporating a compactification to allow
-it to extend to spatial infinity. Within each of the nested subdomains,
+it to extend to spatial infinity. Within each of the nested sub-domains,
fields are decomposed into Chebyshev modes radially and spherical harmonics
in the angular directions, with elliptic equation solving reduced to a matrix
problem. The nested sub-domains are not required to be perfectly spherical, and
indeed one may modify the outer boundaries of each to cover any convex shape.
-For NSs, this allows one to map the surface of a particular subdomain
+For NSs, this allows one to map the surface of a particular sub-domain
to the NS surface, minimizing Gibbs error there. For BHs, excision
boundary conditions are imposed at the horizon. To read a field solution
describing a given quantity onto a {\tt Cactus}-based grid, one must incorporate the data
@@ -1359,7 +1359,7 @@
terms appear in the evolution equations for the conserved variables,
they are restricted to appear only inside divergence terms, which are
referred to as fluxes, and never in the source terms. By calculating fluxes at cell faces, we may
-obtain a consistent description of the intercell values using
+obtain a consistent description of the inter-cell values using
reconstruction techniques that account for the fact that hydrodynamic
variables are not smooth and may not be finite differenced accurately.
All other source terms in the evolution equations may contain only the
@@ -1516,7 +1516,7 @@
solver \cite{Harten:1983on,Einfeldt:1988og} (HLL or HLLE, depending on the reference),
which uses a two-wave approximation to calculate the evolution along
the shock front. With $\xi_-$ and $\xi_+$ the most negative and
-most positive wavespeeds present on either side of the interface,
+most positive wave speeds present on either side of the interface,
the solution $q(\xi)$ is assumed to take the form
\begin{equation}
\label{hlle1}
@@ -1628,7 +1628,7 @@
An equation of state connecting the primitive state variables is
needed to close the system of GR hydrodynamics equations. The module
\codename{EOS\_Omni} provides a unified general equation of state
-(EOS) interface and backend for simple analytic and complex
+(EOS) interface and back-end for simple analytic and complex
microphysical EOSs.
The polytropic EOS,
@@ -1873,7 +1873,7 @@
The module \codename{QuasiLocalMeasures} implements
the calculation of mass and spin multipoles from the isolated
and dynamical horizon formalism~\cite{Dreyer:2002mx, Schnetter:2006yt},
-as well as a number of other proposed formulae for quasilocal mass, linear
+as well as a number of other proposed formul\ae{} for quasilocal mass, linear
momentum and angular momentum that have been advanced over the
years~\cite{Szabados:2004ql}. Even though only a few rigorous proofs exist
that establish the properties of these latter quantities, they have
@@ -2038,7 +2038,7 @@
for $T_{\mu\nu}$.
Finally, \codename{ADMAnalysis} calculates a variety of derived spacetime
-quantities that are often useful in postprocessing such as the determinant
+quantities that are often useful in post-processing such as the determinant
of the 3-metric $\det{\gamma}$, the trace of the extrinsic curvature $K$,
the 3-Ricci tensor in Cartesian coordinates $\mathcal{R}_{ij}$ and its trace
$\mathcal{R}$, as well as the 3-metric and extrinsic curvature converted to
@@ -2302,7 +2302,7 @@
\ref{fig:kerr_waves_l4} we can mention that the extracted $\ell =4, m=4$ mode
is non-zero due to truncation error, but shows fourth order convergence to
zero with resolution (this mode is not present in the initial data and is not
-excited during the evolution). Other modes are zero to roundoff due to
+excited during the evolution). Other modes are zero to round-off due to
symmetries at all resolutions.
Since there is non-trivial gravitational wave content in the initial data
@@ -2446,7 +2446,7 @@
these were computed to be fourth-order accurate in this example.
The second module, \codename{Multipole}, interpolates the
Weyl scalars into spheres with centers and radii specified by
-the user, and performs a spherical harmonic multipolar
+the user, and performs a spherical harmonic multi-polar
mode decomposition on these spherical surfaces.
Figure~\ref{fig:tracks_waveform} shows the
real and imaginary parts of the ($l=2$, $m=2$) mode for
@@ -2657,7 +2657,7 @@
center and a coordinate radius of $r=5\mathrm{M}$ which is about half-way between the
center and the surface. The observed convergence rate for most of the
simulation time lies between $1.4$ and $1.5$ at the center, and between $1.6$ and
-$2$ at $r=5\mathrm{M}$, consistent with the expected data-dependend convergence
+$2$ at $r=5\mathrm{M}$, consistent with the expected data-dependent convergence
order of the underlying hydrodynamics evolution scheme.
\begin{figure}
@@ -2695,7 +2695,7 @@
\rho' + K (\rho')^2 = \rho + K_{ID} \rho^2.
\label{eqn:collapse_rho_rescaled}
\end{equation}
-Compared to the initial configuration, this rescaled star posseses a higher
+Compared to the initial configuration, this rescaled star possesses a higher
central density and lower pressure.
This change in $K$ speeds up the collapse and provides a
physical trigger for the collapse rather than relying on random numerical
@@ -2714,7 +2714,7 @@
% radii: 2M,4M,8M,13.6M(!),32M,64M respectively. Changing it to 16M
% doesn't really do any good or harm.
level is twice that of the surrounding level. In order to resolve the large
-density gradients developping during the collapse, two more levels with radii
+density gradients developing during the collapse, two more levels with radii
$4\,M_\odot$ and $2\,M_\odot$ are present inside of the star. We use the PPM
reconstruction method and the HLLE Riemann solver to obtain second
order convergent results in smooth regions. Due to the presence of the