[ET Trac] [Einstein Toolkit] #935: HTTPD can use large amounts of memory for its hash table of pages

Einstein Toolkit trac-noreply at einsteintoolkit.org
Tue May 29 17:56:30 CDT 2012


#935: HTTPD can use large amounts of memory for its hash table of pages
-----------------------------------+----------------------------------------
 Reporter:  rhaas                  |       Owner:     
     Type:  defect                 |      Status:  new
 Priority:  minor                  |   Milestone:     
Component:  EinsteinToolkit thorn  |     Version:     
 Keywords:  HTTPD                  |  
-----------------------------------+----------------------------------------
 I just found that for a (stripped down!) parameter file of my production
 simulations, HTTPD creates a hash table via the functions in
 CACTUS_HOME/src/util/Hash.c that consumes 1GB of memory (for the array of
 pointers that forms the top-level hash table structure) to hold about
 10000 entries. I am not sure whether this is due to a poor choice of
 hashing function (Util_HashHash) or to the fact that the table size is
 doubled until the number of entries is smaller than the number of hash
 slots (in Util_HashRehash and Util_HashAdd). It was somewhat unexpected
 that a non-science thorn would use that much memory.
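 Just to make the imbalance concrete, here is a back-of-the-envelope
 calculation (my numbers, assuming 64-bit pointers): a 1GB top-level
 pointer array implies roughly 134 million slots for only 10000 entries,
 i.e. a load factor of about 7e-5.

 ```c
 #include <stdio.h>

 int main(void)
 {
     /* Numbers from the report; sizeof(void *) == 8 is an assumption
        (LP64 platform). */
     const long table_bytes = 1L << 30;            /* 1 GB pointer array */
     const long slots = table_bytes / sizeof(void *);
     const long entries = 10000;

     printf("slots: %ld\n", slots);
     printf("load factor: %g\n", (double)entries / slots);
     return 0;
 }
 ```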

 One alternative that would use less memory is to increase the filling
 factor, i.e. only rehash if hash->keys > 10*hash->fill (perhaps starting
 from some limit on the number of keys); another is to use something like
 the binary tree implementation in BinaryTree.c (but not that exact one,
 since it is broken in at least two places).
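 A minimal sketch of the relaxed growth policy, assuming the field names
 from above (hash->keys, hash->fill); their exact semantics in Hash.c are
 my guess, not the actual Cactus API:

 ```c
 #include <assert.h>

 /* Hypothetical stand-in for the hash header in Hash.c. */
 typedef struct
 {
     unsigned long keys;   /* number of stored entries (assumed) */
     unsigned long fill;   /* number of hash slots (assumed) */
 } t_hash;

 /* Grow only once the table averages more than 10 entries per slot,
    instead of doubling until entries < slots. */
 static int should_rehash(const t_hash *hash)
 {
     return hash->keys > 10 * hash->fill;
 }

 int main(void)
 {
     t_hash h = { 10000, 1024 };
     assert(!should_rehash(&h));   /* 10000 <= 10*1024: no growth */
     h.fill = 512;
     assert(should_rehash(&h));    /* 10000 > 10*512: grow now */
     return 0;
 }
 ```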

 A simple linear list might also be sufficient, since I expect HTTPD does
 not have to be lightning fast and serve hundreds of requests per second.
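 Something along these lines would do (a sketch only; the names are
 illustrative, not the existing HTTPD or Util interfaces):

 ```c
 #include <stdlib.h>
 #include <string.h>

 /* Singly linked list of pages: O(n) lookup, but n is small for HTTPD
    and the memory overhead is one node per entry. */
 typedef struct t_page
 {
     char *key;
     void *data;
     struct t_page *next;
 } t_page;

 static t_page *page_add(t_page *head, const char *key, void *data)
 {
     t_page *p = malloc(sizeof *p);
     p->key  = strdup(key);
     p->data = data;
     p->next = head;
     return p;
 }

 static void *page_find(const t_page *head, const char *key)
 {
     for (; head; head = head->next)
         if (strcmp(head->key, key) == 0)
             return head->data;
     return NULL;
 }

 int main(void)
 {
     int v = 42;
     t_page *pages = NULL;
     pages = page_add(pages, "/index.html", &v);
     return page_find(pages, "/index.html") == &v ? 0 : 1;
 }
 ```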

-- 
Ticket URL: <https://trac.einsteintoolkit.org/ticket/935>
Einstein Toolkit <http://einsteintoolkit.org>
The Einstein Toolkit
