[Users] problem running BBH example
Erik Schnetter
schnetter at cct.lsu.edu
Fri Dec 23 10:42:06 CST 2016
Cory
This indicates a severe problem. The most likely reason is that two
processes are started, but they don't "find" each other, and then
both run independently, each thinking that it is the only process.
This can be caused by a mixup of MPI implementations. If you have two
MPI implementations installed, then you might be using one of them to
compile, but the "mpirun" command of the other one to run.
You will need to correct this for things to work. It's difficult to
tell from the outside what went wrong. The output from Cactus when it
was configured should/might have stated where the MPI implementation is
located (e.g. "/usr/local"), and if you type "which mpirun", it should
tell you which one you are using (maybe "/usr").
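A sketch of how this mismatch can be checked from the shell (the paths shown
are examples only, and the `-show`/`--showme` wrapper flags are the Open MPI
and MPICH conventions, not something stated in this thread):

```shell
# Which mpirun is first on PATH? (falls back to "none" if not installed)
mpirun_path=$(command -v mpirun || echo "none")
echo "mpirun on PATH: $mpirun_path"   # e.g. /usr/bin/mpirun

# The version banner identifies the implementation (Open MPI vs. MPICH):
#   mpirun --version

# The compiler wrapper used to build Cactus should come from the same
# prefix; the wrappers can print their configuration:
#   mpicc -show      # MPICH
#   mpicc --showme   # Open MPI

# Cactus-independent smoke test: a healthy 2-process startup prints two
# hostnames; if the ranks do not find each other, each rank behaves as an
# independent single-process run (as seen in the error below):
#   mpirun -np 2 hostname
```

If the `mpirun` on PATH and the compile-time MPI live under different
prefixes, that is the mixup described above.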
-erik
On Fri, Dec 23, 2016 at 11:07 AM, Cory Chu <cory.yk.chu at gmail.com> wrote:
> Erik
>
> Actually, even with --procs=2 --num-threads=1
> I will get:
> The environment variable CACTUS_NUM_PROCS is set to 2, but there are 1 MPI
> processes.
>
> However, with only --procs=2 the simulation does actually start, but then
> terminates due to a TAT/Slab limitation
>
> Is there any configuration setting corresponding to
> "but there are 1 MPI processes. This may indicate a severe problem with the
> MPI startup mechanism."?
>
> I'm using
> optionlist ubuntu.cfg
> submitscript generic.sub
> runscript debian.run
>
> Cory
>
>
> On Fri, Dec 23, 2016 at 9:44 PM, Erik Schnetter <schnetter at cct.lsu.edu>
> wrote:
>>
>> Cory
>>
>> Using "--procs=4 --num-threads=1" might help. You do not need to
>> change the settings in the ".ini" file; only the command line
>> parameters determine how many MPI processes are used.
>>
>> -erik
>>
>> On Fri, Dec 23, 2016 at 8:12 AM, Cory Chu <cory.yk.chu at gmail.com> wrote:
>> > Hi Ian
>> >
>> > I'm running on VirtualBox, configured with 2 processors and 8 GB RAM
>> >
>> > When I try this submit command
>> >
>> > create-submit mybbh --parfile par/BBHLowRes.par --num-threads=1
>> > --procs=2
>> >
>> > I get the following message in the .err file:
>> >
>> > WARNING level 0 from host kpc-VirtualBox process 0
>> > while executing schedule bin (none), routine (no thorn)::(no routine)
>> > in thorn Carpet, file
>> >
>> > /home/kuangtly/programming/ET/forfun/Cactus/arrangements/Carpet/Carpet/src/SetupGH.cc:153:
>> > -> The environment variable CACTUS_NUM_PROCS is set to 2, but there
>> > are 1
>> > MPI processes. This may indicate a severe problem with the MPI startup
>> > mechanism.
>> > cactus_sim:
>> >
>> > /home/kuangtly/programming/ET/forfun/Cactus/arrangements/Carpet/Carpet/src/helpers.cc:275:
>> > int Carpet::Abort(const cGH*, int): Assertion `0' failed.
>> > Rank 0 with PID 15985 received signal 6
>> > Writing backtrace to BBHLowRes/backtrace.0.txt
>> > Aborted (core dumped)
>> >
>> >
>> > I have changed these settings in the .ini file:
>> > ppn = 2
>> > max-num-threads = 2
>> > num-threads = 2
>> >
>> >
>> >
>> > On Fri, Dec 23, 2016 at 6:28 PM, Ian Hinder <ian.hinder at aei.mpg.de>
>> > wrote:
>> >>
>> >>
>> >> On 23 Dec 2016, at 10:52, Cory Chu <cory.yk.chu at gmail.com> wrote:
>> >>
>> >> Erik
>> >>
>> >> Should I simply change the .ini file in simfactory/mdb/machines
>> >> with
>> >> ppn = 2
>> >> max-num-threads = 2
>> >> num-threads = 2
>> >>
>> >> and add --procs=2 to the run command?
>> >>
>> >> Sorry, I just cannot find detailed documentation on this part.
>> >> After I changed the above settings,
>> >> the problem was not solved.
>> >>
>> >>
>> >> Hi Cory,
>> >>
>> >> Can you give the command line you are using to submit the simulation?
>> >> You need to ensure that the number of MPI processes you use is larger.
>> >> You could do this by running fewer threads per process, or, if you are
>> >> already running one thread per process, by increasing the number of
>> >> processes, i.e. by using more cores. That might mean reconfiguring the
>> >> machine definition to allow more cores to be used than you actually
>> >> have. What machine is it; i.e. how many processors and cores do you
>> >> have?
>> >>
>> >> --
>> >> Ian Hinder
>> >> http://members.aei.mpg.de/ianhin
>> >>
>> >
>>
>>
>>
>> --
>> Erik Schnetter <schnetter at cct.lsu.edu>
>> http://www.perimeterinstitute.ca/personal/eschnetter/
>
>
--
Erik Schnetter <schnetter at cct.lsu.edu>
http://www.perimeterinstitute.ca/personal/eschnetter/