[Users] Problem with CarpetRegrid2/AMR
Erik Schnetter
schnetter at cct.lsu.edu
Thu Sep 1 13:25:35 CDT 2011
On Thu, Sep 1, 2011 at 11:51 AM, Hal Finkel <hfinkel at anl.gov> wrote:
> On Thu, 2011-09-01 at 11:37 -0400, Erik Schnetter wrote:
>> On Thu, Sep 1, 2011 at 10:53 AM, Hal Finkel <hfinkel at anl.gov> wrote:
>> > On Tue, 2011-08-30 at 21:06 -0400, Erik Schnetter wrote:
>> >> On Tue, Aug 30, 2011 at 5:28 PM, Hal Finkel <hfinkel at anl.gov> wrote:
>> >> > Could I also decrease the block size? I currently have
>> >> > CarpetRegrid2::adaptive_block_size = 4, could it be smaller than that?
>> >> > Is there a restriction based on the number of ghost points?
>> >>
>> >> Yes, you can reduce the block size. I assume that both the regridding
>> >> operation and the time evolution will become slower if you do that,
>> >> because more blocks will have to be handled.
>> >
>> > Regardless of what I do, once we get past the first coarse time step,
>> > the program seems to "hang" at "INFO (Carpet): [ml=0][rl=0][m=0][tl=0]
>> > Regridding map 0...".
>> >
>> > Overall, it is in dh::regrid(do_init=true). It spends most of its time
>> > in bboxset<int, 3>::normalize() and, specifically, mostly in the loop:
>> > for (typename bset::iterator nsi = nbs.begin(); nsi != nbs.end(); ++nsi).
>> > The normalize() function does exit, however, so it is not hanging
>> > in that function.
>> >
>> > The core problem seems to be that it takes a long time to execute:
>> > boxes = boxes.shift(-dir) - boxes;
>> > in dh::regrid(do_init=true). Probably because boxes has 129064 elements.
>> > The coarse grid is now only 30^3 and I've left the regrid box size at 4.
>> > I'd think, then, that the coarse grid should have a maximum of 30^3/4^3
>> > ~ 420 refinement regions.
>> >
>> > What is the best way to figure out what is going on?
>>
>> Hal
>>
>> Yes, this function is very slow. I did not expect it to be
>> prohibitively slow. Are you compiling with optimisation enabled?
>
> I've tried with optimizations enabled (and without for debugging).
>
>>
>> The bboxset represents the set of refined regions, and it is
>> internally represented as a list of bboxes (regions). Carpet performs
>> set operations on these (intersection, union, complement, etc.) to
>> determine the communication schedule, i.e. which ghost zones of which
>> bbox need to be filled from which other bbox. Unfortunately, the
>> algorithm used for this is O(n^2) in the number of refined regions,
>> and the set operations themselves, being implemented via lists, are
>> O(n^2) in the set size, leading to a rather unfortunate overall
>> complexity (a small sketch of this follows below the quoted text).
>> The only cure is to reduce the number of bboxes (make them larger)
>> and to regrid fewer times.
>
> This is what I suspected, but nevertheless, is there something wrong?
> How many boxes would you expect me to have? The reason that it does
> not finish, even with optimizations, is that there are 129K boxes in the
> loop (that's at least 16 billion box normalizations?).
>
> The coarse grid is only 30^3, and the regrid box size is 4, so at
> maximum, there should be ~400 level one boxes. Even if some of those
> have level 2 boxes, I don't understand how there could be 129K boxes.
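(To illustrate the O(n^2) behaviour described in the quoted text, here
is a minimal C++ schematic; it is not CarpetLib's actual bboxset code,
and the names are purely illustrative. A list-based set operation has
to compare every box of one set against every box of the other, so for
two sets of ~129064 boxes that is on the order of 1.7e10 pairs.)

    #include <cstddef>
    #include <vector>

    // Simplified stand-in for a bbox: inclusive index ranges per axis.
    struct box { int lo[3], hi[3]; };

    bool intersects(const box& a, const box& b) {
      for (int d = 0; d < 3; ++d)
        if (a.hi[d] < b.lo[d] || b.hi[d] < a.lo[d]) return false;
      return true;
    }

    // Pairwise comparison of two box lists: O(n*m) work.  This is the
    // cost structure behind "boxes = boxes.shift(-dir) - boxes" when
    // the set is stored as a flat list of boxes.
    long count_overlapping_pairs(const std::vector<box>& a,
                                 const std::vector<box>& b) {
      long pairs = 0;
      for (std::size_t i = 0; i < a.size(); ++i)     // O(n)
        for (std::size_t j = 0; j < b.size(); ++j)   // O(m) per i
          if (intersects(a[i], b[j])) ++pairs;
      return pairs;
    }
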
The refinement structure itself should have one bbox per refined 4^3
box, and both CarpetRegrid2 and CarpetLib will try to combine these
into fewer boxes where possible, i.e. where adjacent blocks can form
larger rectangular boxes. I would thus expect no more than about
(30/4)^2, i.e. at most 8^2 = 64 bboxes, on level one.
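(A rough sketch of the kind of combining meant here, assuming equal
4^3 blocks; this is not the actual CarpetRegrid2/CarpetLib algorithm,
which is more general. Merging blocks that line up along x turns up to
~420 small blocks into at most 8^2 = 64 column-shaped boxes.)

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct box { int lo[3], hi[3]; };  // inclusive index ranges

    // Sort by y/z origin, then by x, so x-adjacent blocks become
    // consecutive.
    static bool by_yz_then_x(const box& a, const box& b) {
      if (a.lo[1] != b.lo[1]) return a.lo[1] < b.lo[1];
      if (a.lo[2] != b.lo[2]) return a.lo[2] < b.lo[2];
      return a.lo[0] < b.lo[0];
    }

    // Merge boxes that are adjacent in x and have identical y/z extent.
    // Input is assumed to be equal-sized, non-overlapping blocks.
    std::vector<box> merge_along_x(std::vector<box> bs) {
      std::sort(bs.begin(), bs.end(), by_yz_then_x);
      std::vector<box> merged;
      for (std::size_t i = 0; i < bs.size(); ++i) {
        if (!merged.empty()) {
          box& prev = merged.back();
          if (prev.lo[1] == bs[i].lo[1] && prev.hi[1] == bs[i].hi[1] &&
              prev.lo[2] == bs[i].lo[2] && prev.hi[2] == bs[i].hi[2] &&
              prev.hi[0] + 1 == bs[i].lo[0]) {
            prev.hi[0] = bs[i].hi[0];  // extend the previous box in x
            continue;
          }
        }
        merged.push_back(bs[i]);
      }
      return merged;
    }
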
Internally, one of the expensive operations is determining which
points are buffer points. But before I explain this algorithm, one
important question: are you using buffer points? Those become
expensive, and you should be able to avoid them. First, you should be
able to simply switch off the corresponding parameter; if this leads
to a loss of convergence, you should be able to rewrite your equations
as a first-order system (only first spatial derivatives, by
introducing additional evolved variables), which will most likely be
stable. Not using buffer zones should make things much cheaper.
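(As a standard example of such a reduction, not taken from this
thread: for the flat-space scalar wave equation
\partial_t^2 \phi = \delta^{ij} \partial_i \partial_j \phi,
introduce \Pi = \partial_t \phi and \Phi_i = \partial_i \phi and evolve

    \partial_t \phi   = \Pi
    \partial_t \Pi    = \delta^{ij} \partial_i \Phi_j
    \partial_t \Phi_i = \partial_i \Pi

so that only first spatial derivatives appear on the right-hand sides.)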
-erik
--
Erik Schnetter <schnetter at cct.lsu.edu> http://www.cct.lsu.edu/~eschnett/