[Users] memory address limitations, using ints?
Scott Hawley
scott.hawley at belmont.edu
Wed Apr 11 01:12:42 CDT 2012
Hi. I'm doing runs on Nautilus with some rather large grids, which are being distributed across only a few (2 to 8) MPI processes.
I'm noticing that there's a grid size of about 385^3 which I can't exceed without Cactus/Carpet beginning to behave strangely: e.g., AHFinderDirect no longer works, and memory errors abound.
Obviously there's the possibility that I have errors in the way I specify the grids in my parameter files. But I'm wondering:
Have other people had success doing "very" large runs on, say, a shared-memory architecture where MPI / domain decomposition are not employed? If so, then I've probably just made a mistake in my par file. If not...
I'm wondering if this has to do with trouble addressing memory locations, and I notice that the type "int" is used in places, e.g. in the definition of CCTK_GFINDEX3D:
static inline int CCTK_GFINDEX3D (const cGH *GH, int i, int j, int k);

static inline int CCTK_GFINDEX3D (const cGH *GH, int i, int j, int k)
{
  return (i + GH->cctk_lsh[0]*(j + GH->cctk_lsh[1]*k));
}
Should these "int" declarations be changed to "long" or "unsigned int" in order to access larger numbers?
Thanks.