This module provides the Intel Message Passing Interface (MPI) library, version
'2017.4.239', with its compiler wrappers (mpicc, mpicxx, mpif77 and mpif90) for
the GNU compiler suite (gcc, g++ and gfortran); see also
'http://software.intel.com/en-us/intel-mpi-library'.
The corresponding compiler is the system GNU compiler.
No other compiler module is loaded automatically.
Local documentation:
See 'mpicc -help', 'mpicxx -help', 'mpif77 -help', 'mpif90 -help',
'mpirun -help', 'mpiexec -help', 'mpiexec.hydra --help' and all man pages in
$MPI_MAN_DIR = /opt/bwhpc/common/compiler/intel/compxe.2017.6.256/impi/2017.4.239/man
as well as 'User_Guide.pdf', 'Reference_Manual.pdf' and 'get_started.htm' in
$MPI_DOC_DIR = /opt/bwhpc/common/compiler/intel/compxe.2017.6.256/documentation_2017/en/mpi
Compiling and executing MPI programs:
Instead of the usual compiler commands, compile and link your MPI program with
mpicc, mpicxx, mpif77 or mpif90. What a wrapper such as 'mpif90' actually
invokes can be displayed with the command 'mpif90 -show'. In some situations
(e.g. when the underlying compiler is called directly) '-I${MPI_INC_DIR}' is
needed in addition; a short sketch follows below.
The MPI-libraries can be found in
$MPI_LIB_DIR = /opt/bwhpc/common/compiler/intel/compxe.2017.6.256/impi/2017.4.239/intel64/lib
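A short sketch of both cases, using a hypothetical source file 'prog.c' (the
wrapper adds the MPI paths itself; the explicit '-I' is only needed when the
real compiler is invoked directly):
mpicc -show                               # display the real compiler call behind the wrapper
gcc -c -I${MPI_INC_DIR} prog.c -o prog.o  # compile directly with the system gcc
mpicc prog.o -o prog.exe                  # link via the wrapper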
Example for compiling an MPI program with Intel MPI:
module load mpi/impi/2017-gnu-system
cp -v $MPI_EXA_DIR/{pi3.f,pi3.c,pi3.cxx} ./
mpicc pi3.c -o pi3.exe # for C programs
mpicxx pi3.cxx -o pi3.exe # for C++ programs
mpif90 pi3.f -o pi3.exe # for Fortran programs
# Note: unlike the Intel compilers (default -O2), the GNU compilers do not
# optimize by default, so add e.g. '-O2' explicitly for good performance.
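As an optional check, the following sketch (standard tools, not part of this
module) verifies that the executable was linked against this module's MPI
libraries:
ldd pi3.exe | grep -i mpi   # the MPI libraries listed should reside in $MPI_LIB_DIR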
Example for executing the MPI program using 4 cores on the local node:
module load mpi/impi/2017-gnu-system
mpirun -n 4 `pwd`/pi3.exe
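For a quick look at how the workers start up and where they are placed, Intel
MPI's debug output can be enabled; a sketch (assuming the usual '-genv' option
and the I_MPI_DEBUG variable):
mpirun -genv I_MPI_DEBUG 5 -n 4 `pwd`/pi3.exe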
Example MOAB snippet for submitting the program on 2 x 16 = 32 cores:
#MSUB -l nodes=2:ppn=16
module load mpi/impi/2017-gnu-system
mpirun -n $MOAB_PROCCOUNT $MOAB_SUBMITDIR/pi3.exe
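A complete job script built around this snippet might look as follows (walltime,
job name and file name are placeholder values, not requirements):
#!/bin/bash
#MSUB -l nodes=2:ppn=16
#MSUB -l walltime=00:30:00
#MSUB -N pi3
module load mpi/impi/2017-gnu-system
mpirun -n $MOAB_PROCCOUNT $MOAB_SUBMITDIR/pi3.exe
Saved e.g. as 'pi3.job', it would be submitted with 'msub pi3.job'.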
The mpirun command executes the three required commands mpdboot, mpiexec and
mpdallexit in sequence, automatically booting the mpd daemons on the nodes
defined by $SLURM_NODELIST or $PBS_NODEFILE. For more control (e.g. over the
distribution of workers), mpdboot, mpiexec and mpdallexit can also be called
individually; a sketch follows below.
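A rough sketch of that manual sequence for the 2-node example above (the exact
options are best checked in the man pages in $MPI_MAN_DIR):
mpdboot -n 2 -f hostfile   # start one mpd daemon per node listed in 'hostfile'
mpiexec -n 32 ./pi3.exe    # launch the 32 MPI workers
mpdallexit                 # shut the mpd ring down again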
For details on how to use Intel MPI, please read
$MPI_DOC_DIR/User_Guide.pdf and $MPI_DOC_DIR/get_started.htm
For details on the library and include directories, please call
module show mpi/impi/2017-gnu-system
Troubleshooting:
* If a job with many MPI workers (e.g. 32) seems to hang during initialization,
please try a different combination of GNU compiler and impi versions.
* If the STDOUT of your MPI GNU Fortran program is displayed with a delay,
you can disable the stdout buffering of the program (a job-script sketch
follows below), e.g. via
export GFORTRAN_UNBUFFERED_ALL=1
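For batch jobs, a minimal sketch is to export the variable in the job script
right before the mpirun call (assuming mpirun passes the environment on to the
workers; otherwise Intel MPI's '-genv' option can be used):
export GFORTRAN_UNBUFFERED_ALL=1
mpirun -n $MOAB_PROCCOUNT $MOAB_SUBMITDIR/pi3.exe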
The environment variables and commands are available after loading this module.
In case of problems, please contact 'bwunicluster-hotline (at) lists.kit.edu'.