<html>#1878: hdf5 deflate fragments memory
<table style='border-spacing: 1ex 0pt; '>
<tr><td style='text-align:right'> Reporter:</td><td>Frank Löffler</td></tr>
<tr><td style='text-align:right'> Status:</td><td>new</td></tr>
<tr><td style='text-align:right'>Milestone:</td><td></td></tr>
<tr><td style='text-align:right'> Version:</td><td>development version</td></tr>
<tr><td style='text-align:right'> Type:</td><td>enhancement</td></tr>
<tr><td style='text-align:right'> Priority:</td><td>minor</td></tr>
<tr><td style='text-align:right'>Component:</td><td>Other</td></tr>
</table>
<p>Comment (by Roland Haas):</p>
<p>Since the issue is caused by code in libhdf5, there is not much we can actually do about it. Even patching <code>H5Z_filter_deflate</code> in the tar file included in ExternalLibraries/HDF5 would not fix the issue on most clusters, since there we use the system-installed HDF5 library. Looking at the API call documented at <a data-is-external-link="true" href="https://support.hdfgroup.org/documentation/hdf5/latest/group___d_c_p_l.html#gaf1f569bfc54552bdb9317d2b63318a0d" rel="nofollow">https://support.hdfgroup.org/documentation/hdf5/latest/group___d_c_p_l.html#gaf1f569bfc54552bdb9317d2b63318a0d</a>, there is no option to specify the initial buffer size. </p>
<p>A patch for the self-built library is in the <code>rhaas/deflate</code> branch of <a data-is-external-link="true" href="https://github.com/einsteinToolkit/ExternalLibraries-hdf5" rel="nofollow">https://github.com/einsteinToolkit/ExternalLibraries-hdf5</a>, for all the good it might do. The 32-bit size field in gzip files is no longer considered authoritative: it cannot record uncompressed file sizes above 2&#160;GB, since the trailer stores only the size modulo 2<sup>32</sup>.</p>
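<p>As a minimal sketch of the gzip size-field limitation, the following Python snippet (standard library only; the file sizes are illustrative) reads the ISIZE trailer field directly and shows that it holds only the uncompressed length modulo 2<sup>32</sup>, which is why it cannot be trusted for large files:</p>

```python
import gzip
import io
import struct

# Compress a small payload entirely in memory.
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as f:
    f.write(b"x" * 1000)

# The gzip trailer ends with CRC32 (4 bytes) followed by ISIZE (4 bytes),
# a little-endian 32-bit field holding the uncompressed size mod 2**32.
isize = struct.unpack("<I", buf.getvalue()[-4:])[0]
print(isize)  # 1000

# For a hypothetical 5 GiB input, the field would wrap around:
print((5 * 2**30) % 2**32)  # 1073741824, not the true size
```

<p>Because of this wraparound, tools (and HDF5's deflate filter) cannot use ISIZE to pre-allocate an output buffer of the correct size for large inputs.</p>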
<p>I’d suggest closing this as “wontfix”, since there is nothing we can realistically do about it on our side.</p>
<p>--<br/>
Ticket URL: <a href='https://bitbucket.org/einsteintoolkit/tickets/issues/1878/hdf5-deflate-fragments-memory'>https://bitbucket.org/einsteintoolkit/tickets/issues/1878/hdf5-deflate-fragments-memory</a></p>
</html>