[Users] Patch: Use smaller chunk size for Carpet HDF5 I/O

Erik Schnetter schnetter at cct.lsu.edu
Thu Jun 24 17:05:26 CDT 2010


Following up on a discussion on the AEI Damiana mailing list, I suggest this patch, which should reduce the memory requirements for recovering from compressed checkpoint files:

Reduce the chunk size when writing 3D HDF5 files by setting the chunk's z extent to 1.

HDF5 datasets have to be stored in chunks when checksums or compression is used.  Since HDF5 reads and decompresses each chunk as a whole, the chunk size determines the memory required to read the file back, e.g. when recovering from a checkpoint.
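For illustration, here is a minimal sketch of the idea (this is not the attached CarpetIOHDF5.diff; the file name, dataset name, and extents are invented): creating a chunked, compressed 3D dataset through the HDF5 C API whose chunks are single z-planes, assuming the data are laid out in C order as {nz, ny, nx} so that dims[0] is the z direction:

/* Sketch only: chunked, compressed 3D dataset with single-z-plane
   chunks.  Names and extents are invented for illustration. */
#include <hdf5.h>

int main(void)
{
  hsize_t const dims[3]  = {16, 32, 32};            /* nz, ny, nx */
  hsize_t const chunk[3] = {1, dims[1], dims[2]};   /* z extent = 1 */

  hid_t const file  = H5Fcreate("example.h5", H5F_ACC_TRUNC,
                                H5P_DEFAULT, H5P_DEFAULT);
  hid_t const space = H5Screate_simple(3, dims, NULL);

  /* HDF5 requires chunked storage before checksum or compression
     filters can be enabled on a dataset */
  hid_t const plist = H5Pcreate(H5P_DATASET_CREATE);
  H5Pset_chunk(plist, 3, chunk);
  H5Pset_fletcher32(plist);          /* checksum filter */
  H5Pset_deflate(plist, 1);          /* gzip compression, level 1 */

  hid_t const dset = H5Dcreate2(file, "/grid_function", H5T_NATIVE_DOUBLE,
                                space, H5P_DEFAULT, plist, H5P_DEFAULT);

  /* ... fill and H5Dwrite the data here ... */

  H5Dclose(dset);
  H5Pclose(plist);
  H5Sclose(space);
  H5Fclose(file);
  return 0;
}

Reading back any part of such a file then only ever requires decompressing one ny*nx plane at a time, rather than a chunk spanning the full z extent.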

-erik

-- 
Erik Schnetter <schnetter at cct.lsu.edu>   http://www.cct.lsu.edu/~eschnett/


-------------- next part --------------
A non-text attachment was scrubbed...
Name: CarpetIOHDF5.diff
Type: application/octet-stream
Size: 3430 bytes
Desc: not available
URL: http://lists.einsteintoolkit.org/pipermail/users/attachments/20100624/baeaa65c/attachment.obj