[Users] Failure to run static_tov simulation using more than 1 node on cluster
Steven R. Brandt
sbrandt at cct.lsu.edu
Thu Jul 6 15:00:13 CDT 2023
If your ini file is not part of the simfactory checkout (i.e. we don't
know about it), you may have to edit it by hand. I think simfactory
attempts to run lscpu to determine some of these parameters, but if it
is not available, 1 may be the default.
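For example, on a machine with 24 cores per node and one hardware thread
per core, the relevant entries in simfactory/mdb/machines/<host>.ini
might look something like this (illustrative values only; I'm assuming
the cluster actually exposes at least 2 such nodes):

    ppn             = 24   # physical cores per node
    max-num-threads = 24   # one hardware thread per core
    num-threads     = 12   # OpenMP threads per MPI process
    nodes           = 2    # nodes simfactory is allowed to request
                           # (presumably where the "maxnodes" limit comes from)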
If you are part of a group, it may be beneficial to have your ini file
checked in to simfactory (after it is debugged and tested) so that
other members of your group have access to it. We can help make that happen.
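Once the machine entry allows more than one node, a 48-core run can then
be requested at submit time. As a rough sketch (the parameter file path
and walltime below are placeholders, not values from your setup):

    ./simfactory/bin/sim create-submit static_tov \
        --parfile=par/static_tov.par \
        --procs=48 --num-threads=12 --walltime=8:00:00
    # 48 cores / 12 threads = 4 MPI processes; with ppn=24 that is
    # 2 processes per node, i.e. 2 nodes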
--Steve
On 6/20/2023 3:01 PM, Wei Sun wrote:
> Hello,
>
> I am trying to run the static_tov simulation on my university's
> cluster (CRC Notre Dame
> <https://docs.crc.nd.edu/new_user/quick_start.html>). However, I got
> this error when acquiring 48 cores:
>
> Error: Too many nodes specified: nodes=2 (maxnodes is 1)
>
> I am assuming this is due to the /#Source tree management/ section in
> the Cactus/simfactory/mdb/machines/<host>.ini file. In my case, the
> automatically generated .ini file I got has this configuration:
>
> ppn = 24
> max-num-threads = 24
> num-threads = 12
> nodes = 1
>
> However, the machine I use has 24 cores per node, and each core runs a
> single thread. That means that to run the simulation on 48 cores, I
> need 2 nodes.
>
> Although I didn't get 2 hosts successfully (I got only 1), I am
> wondering whether I have to edit the <host>.ini file by hand after
> running ./simfactory/bin/sim setup-silent, for example setting
> num-threads=1 and nodes=2. Or is there another correct way to get more
> than 1 node?
>
> I would appreciate it if you could give me some ideas about this problem.
>
> Best regards,
> Wei
>
>