[Users] HDF5 Error, Hilbert

Ian Hinder ian.hinder at aei.mpg.de
Tue Oct 20 07:24:59 CDT 2015


On 20 Oct 2015, at 13:44, Geraint Pratten <g.pratten at sussex.ac.uk> wrote:

> Hi,
> 
> So I'm getting a problem passing the correct number of MPI processes through to Carpet. I'm not sure what I've messed up in the configurations, the error I am getting is:
> 
>> The environment variable CACTUS_NUM_PROCS is set to 4, but there are 1 MPI processes. This may indicate a severe problem with the MPI startup mechanism.
> 
> I've attached the .run and .sub scripts that I use. Can anyone see anything obviously wrong? The local cluster uses the UNIVA Grid Engine for submission scripts. 

Hi Geraint,

In your submission script, you have

#! /bin/bash
#$ -cwd
#$ -j y
#$ -o output/@SIMULATION_NAME@.out
#$ -e output/@SIMULATION_NAME@.err

## Error and output
#$ -m abe
#$ -M geraint.pratten@gmail.com

## Set up parallel environment
##$ -pe openmpi_mixed_32 @PROCS_REQUESTED@
#$ -pe mpich @PROC_REQUESTEDS@

## Request queue and job class
#$ -q mps.q
#$ -jc mps.medium

## Walltime, simulation name and memory
#$ -l h_rt=@WALLTIME@
#$ -N @SHORT_SIMULATION_NAME@


Specifically, you wrote @PROC_REQUESTEDS@ instead of @PROCS_REQUESTED@, i.e. the S belongs on PROCS, not on the end.  Because the placeholder is misspelled, simfactory presumably leaves it unsubstituted, so the scheduler never receives a sensible process count and you end up with a single MPI process.  Can you also post your output and error files?  It would be good if simfactory could detect errors like this.  It would also be good if the queuing system on that machine complained when it gets a number of procs that it doesn't understand.  Maybe it does complain, but it's only a warning?
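As a stopgap, something along these lines (just a sketch; "mysim.sub" stands in for whatever generated submission script simfactory writes) would catch any @...@ tokens that were left unsubstituted:

# Rough sketch: fail if the generated submission script still contains
# any @PLACEHOLDER@ tokens that simfactory did not substitute.
# "mysim.sub" is an example filename, not a real simfactory path.
if grep -nE '@[A-Z_]+@' mysim.sub; then
    echo "ERROR: unsubstituted placeholders found (see lines above)" >&2
    exit 1
fi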

I'm also not sure what happens with the double comment ##.  The scheduler may treat it the same as a single #, in which case it would pick up the first -pe line (openmpi_mixed_32) rather than the mpich one.
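If you want to check which lines Grid Engine will actually parse as directives (assuming the default "#$" prefix; again, the filename is just an example), something like this lists them:

# List only the lines that start with the literal directive prefix "#$";
# lines starting with "##$" should not appear, i.e. they stay plain comments.
grep -n '^#\$' mysim.sub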

-- 
Ian Hinder
http://members.aei.mpg.de/ianhin
