[ET Trac] [Einstein Toolkit] #764: cached output in CarpetIOSCalar
Einstein Toolkit
trac-noreply at einsteintoolkit.org
Sun Mar 4 17:18:25 CST 2012
#764: cached output in CarpetIOSCalar
--------------------------+-------------------------------------------------
Reporter: rhaas | Owner: eschnett
Type: enhancement | Status: reopened
Priority: minor | Milestone:
Component: Carpet | Version:
Resolution: | Keywords:
--------------------------+-------------------------------------------------
Comment (by rhaas):
I have not yet really thought about anything else. Keeping the files
open will require tracking failures due to too many open file
descriptors, which the code currently does not handle gracefully. So I
think I would have to add an option such as maximum_number_of_open_files
to limit how many files are kept open; any further files would fall back
to the current open-write-close behaviour. Setting this parameter to 1
would restore the current behaviour and would be the default.
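For illustration, here is a minimal sketch of the kind of LRU handle
cache such a parameter would govern. The class, the parameter value, and
all names are hypothetical, not actual Carpet code:
{{{
// Sketch of an LRU cache of open output streams, bounded by a
// hypothetical maximum_number_of_open_files parameter.
#include <cstddef>
#include <fstream>
#include <list>
#include <map>
#include <memory>
#include <string>
#include <utility>

class FileHandleCache {
public:
  explicit FileHandleCache(std::size_t max_open) : max_open_(max_open) {}

  // Return an open stream for 'name', opening it (in append mode) if
  // it is not cached yet.
  std::ofstream &get(std::string const &name) {
    auto it = index_.find(name);
    if (it != index_.end()) {
      // Move to the front of the LRU list: most recently used.
      lru_.splice(lru_.begin(), lru_, it->second);
      return *lru_.front().second;
    }
    while (!lru_.empty() && lru_.size() >= max_open_) {
      // Evict the least recently used stream; closing flushes it.
      index_.erase(lru_.back().first);
      lru_.pop_back();
    }
    lru_.emplace_front(name,
                       std::make_unique<std::ofstream>(name, std::ios::app));
    index_[name] = lru_.begin();
    return *lru_.front().second;
  }

  // Flush all cached streams, e.g. at checkpoint time.
  void flush_all() {
    for (auto &entry : lru_)
      entry.second->flush();
  }

private:
  using Entry = std::pair<std::string, std::unique_ptr<std::ofstream>>;
  std::size_t max_open_;
  std::list<Entry> lru_;
  std::map<std::string, std::list<Entry>::iterator> index_;
};
}}}
With max_open = 1 the cache holds at most one stream and effectively
degenerates to close-then-open on every change of file, matching the
current behaviour.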
Flushing at checkpoint times would come in addition to whatever flushing
the stream decides to do on its own; it ensures that one can recover
from a checkpoint without loss of data. Flushing after each line is
possible as well, assuming that most of the IO latency is due to the
file open call and not to the writes themselves (i.e. the Lustre servers
either do not attempt synchronous writes from different nodes, or are
smart enough to optimize the case where a file is open on only a single
node).
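To make that concrete, a hypothetical bit of glue on top of the handle
cache sketched above; the function names and the cache size are invented,
and a real version would be invoked from Carpet's checkpoint logic:
{{{
// Hypothetical glue, assuming the FileHandleCache sketch above; the
// names and the scheduling are illustrative, not actual Carpet code.
static FileHandleCache scalar_output_cache(/* max_open = */ 64);

// Would be called at checkpoint time, so that a run recovered from the
// checkpoint does not lose output still sitting in stream buffers.
void FlushScalarOutputAtCheckpoint() { scalar_output_cache.flush_all(); }

// Per-line flushing: cheap if the open() call, not the write itself,
// dominates the IO latency.
void WriteScalarLine(std::string const &filename, std::string const &line) {
  std::ofstream &os = scalar_output_cache.get(filename);
  os << line << '\n';
  os.flush();
}
}}}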
IOASCII has fewer obvious places where things could be combined. One
option might be to keep files open in the way described above.
Another option would be to first collect all data that needs to be
output and hand it over to an auxiliary thread (a pthread, not OpenMP).
With some blocking mechanism to avoid piling up write requests from
different iterations, this would ensure that there is no lag between the
output to IOScalar files and, say, the iteration counter in stdout,
while also avoiding blocking IO (since opening a file is slow and
writing might be slow, too). This would be very easy to implement on top
of my first patch; a sketch follows below.
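Here is a minimal sketch of such a writer thread with a one-slot queue:
submit() blocks while the previous iteration's batch is still being
written, which is the blocking mechanism mentioned above. It is shown
with std::thread (which wraps pthreads on POSIX) rather than raw pthread
calls, and all names are illustrative:
{{{
#include <condition_variable>
#include <fstream>
#include <mutex>
#include <string>
#include <thread>
#include <utility>
#include <vector>

class AsyncWriter {
public:
  AsyncWriter() : worker_(&AsyncWriter::run, this) {}

  ~AsyncWriter() {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      done_ = true;
    }
    cv_.notify_all();
    worker_.join();
  }

  // Hand over one iteration's collected (filename, line) pairs.
  // Blocks while the previous batch is still pending, so requests
  // from successive iterations cannot pile up.
  void submit(std::vector<std::pair<std::string, std::string>> batch) {
    std::unique_lock<std::mutex> lock(mutex_);
    cv_.wait(lock, [this] { return !has_pending_; });
    pending_ = std::move(batch);
    has_pending_ = true;
    cv_.notify_all();
  }

private:
  void run() {
    for (;;) {
      std::vector<std::pair<std::string, std::string>> batch;
      {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return has_pending_ || done_; });
        if (!has_pending_)
          return;               // shutting down, nothing left to write
        batch = std::move(pending_);
        has_pending_ = false;
        cv_.notify_all();       // wake a submitter waiting for the slot
      }
      // The slow part runs off the main thread: open each file and
      // append its line (a handle cache would fit in here as well).
      for (auto const &item : batch) {
        std::ofstream os(item.first, std::ios::app);
        os << item.second << '\n';
      }
    }
  }

  std::mutex mutex_;
  std::condition_variable cv_;
  std::vector<std::pair<std::string, std::string>> pending_;
  bool has_pending_ = false;
  bool done_ = false;
  std::thread worker_;  // last member: run() must see the others initialized
};
}}}
An output routine would collect all (filename, line) pairs for the
current iteration and call submit() once, so the scalar output never
lags stdout by more than one batch.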
For now I will not do much more, since I will use both the caching and
the combining patch in my own repository for a while to see whether
there are any unexpected side effects (and whether there are noticeable
speedups).
--
Ticket URL: <https://trac.einsteintoolkit.org/ticket/764#comment:9>
Einstein Toolkit <http://einsteintoolkit.org>