bwUniCluster : mpi/impi/4.1.3-intel-14.0

Category/Name/Version:  mpi/impi/4.1.3-intel-14.0
Cluster:                bwUniCluster (available to ALL!)
Message/Overview:
Conflicts/Prereqs (= dependencies):
prereq   compiler/intel # accept other versions of the compiler too
conflict mpi/impi
conflict mpi/mvapich
conflict mpi/mvapich2
conflict mpi/openmpi
License:    Commercial (install-doc/EULA.txt)
URL:
What-is:    mpi/impi/4.1.3-intel-14.0: Intel MPI bindings (mpicc mpicxx mpif77 mpif90) version 4.1.3.049 for Intel compiler suite 14.0
Help text:
This module provides the Intel Message Passing Interface, version '4.1.3.049',
and its compiler wrappers (mpicc, mpicxx, mpif77 and mpif90) for the Intel
compiler suite (icc, icpc and ifort); see also
'http://software.intel.com/en-us/intel-mpi-library'.

The corresponding Intel compiler module (version 14.0) is loaded automatically.
Other Intel compiler modules should work too.
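
A quick check that the compiler module was pulled in (a sketch; the exact
'module list' output depends on what else you have loaded):

  module load mpi/impi/4.1.3-intel-14.0
  module list            # should also show compiler/intel/14.0
  which mpicc mpif90     # the wrappers are now on your PATH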

Local documentation:

  See 'mpicc -help', 'mpicxx -help', 'mpif77 -help', 'mpif90 -help',
  'mpirun -help', 'mpiexec -help', 'mpiexec.hydra --help' and all man pages in
    $MPI_MAN_DIR = /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/impi/4.1.3.049/man
  as well as pdfs 'Getting_Started.pdf' and 'Reference_Manual.pdf' in
    $MPI_DOC_DIR = /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/impi/4.1.3.049/doc
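
For example, a man page or PDF from these directories can be opened like this
(the PDF viewer is an assumption; use whichever one is installed):

  man -M $MPI_MAN_DIR mpiexec.hydra
  evince $MPI_DOC_DIR/Getting_Started.pdf   # or xpdf, okular, ...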

Compiling and executing MPI-programs:

  Instead of the usual compiler commands, compile and link your MPI program
  with mpicc, mpicxx, mpif77 or mpif90. What a wrapper such as 'mpif90'
  actually invokes can be displayed with 'mpif90 -show'.

  Sometimes one might need '-I${MPI_INC_DIR}' in addition.
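
For instance (a sketch; 'mysolver.c' is a hypothetical source file):

  mpicc -show                                        # print the underlying icc command line
  mpicc -I${MPI_INC_DIR} mysolver.c -o mysolver.exe  # add the include dir explicitly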

The MPI-libraries can be found in

  $MPI_LIB_DIR = /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/impi/4.1.3.049/intel64/lib
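
To check which of these libraries a compiled binary actually pulls in, 'ldd'
can help (a sketch; 'pi3.exe' refers to the example built below):

  ls $MPI_LIB_DIR            # the installed Intel MPI libraries
  ldd pi3.exe | grep -i mpi  # shared MPI libraries resolved at run time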

Example for compiling an MPI program with Intel MPI:

  module load mpi/impi/4.1.3-intel-14.0
  cp -v $MPI_EXA_DIR/{pi3.f,pi3.c,pi3.cxx} ./
  mpicc  pi3.c   -o pi3.exe # for C programs
  mpicxx pi3.cxx -o pi3.exe # for C++ programs
  mpif90 pi3.f   -o pi3.exe # for Fortran programs
  # Intel: Default optimization level (-O2) results in quite good performance.

Example for executing the MPI program using 4 cores on the local node:

  module load mpi/impi/4.1.3-intel-14.0
  mpirun -n 4 `pwd`/pi3.exe

Example MOAB snippet for submitting the program on 2 x 16 = 32 cores:

  #MSUB -l nodes=2:ppn=16
  module load mpi/impi/4.1.3-intel-14.0
  mpirun -n $MOAB_PROCCOUNT $MOAB_SUBMITDIR/pi3.exe
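
A complete job script might look as follows (walltime, job name and the script
file name 'job_impi.sh' are illustrative placeholders, not site defaults);
submit it with 'msub job_impi.sh':

  #!/bin/bash
  #MSUB -l nodes=2:ppn=16
  #MSUB -l walltime=00:30:00
  #MSUB -N pi3_impi
  module load mpi/impi/4.1.3-intel-14.0
  mpirun -n $MOAB_PROCCOUNT $MOAB_SUBMITDIR/pi3.exe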

The mpirun command automatically executes the three required commands mpdboot,
mpiexec and mpdallexit, booting the mpd daemons on the nodes defined by
$SLURM_NODELIST or $PBS_NODEFILE. For more control (e.g. over the distribution
of workers), the commands mpdboot, mpiexec and mpdallexit can be called
individually instead.
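
A hedged sketch of that explicit sequence inside a batch job (the option names
follow the classic MPD process manager; consult $MPI_DOC_DIR/Reference_Manual.pdf
for the exact syntax of this installation):

  mpdboot -n $(sort -u $PBS_NODEFILE | wc -l) -f $PBS_NODEFILE  # one mpd per node
  mpiexec -n $MOAB_PROCCOUNT $MOAB_SUBMITDIR/pi3.exe
  mpdallexit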

For details on how to use Intel MPI please read
  $MPI_DOC_DIR/Getting_Started.pdf

For details on library and include dirs please call
  module show mpi/impi/4.1.3-intel-14.0

Troubleshooting:
* If a job with many MPI workers (e.g. 32) seems to hang during initialization,
  try recompiling the binary with 'mpi/impi/4.1.0-intel-12.1': first load the
  compiler, e.g. 'compiler/intel/14.0', and then 'mpi/impi/4.1.0-intel-12.1',
  as sketched below.
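
  For example (a sketch, reusing the pi3.c example from above):

    module load compiler/intel/14.0
    module load mpi/impi/4.1.0-intel-12.1
    mpicc pi3.c -o pi3.exe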

The environment variables and commands are available after loading this module.
In case of problems, please contact 'bwunicluster-hotline (at) lists.kit.edu'.
Support:              bwHPC Support-Portal
Installation date:    05.10.2017
Deletion date:
Best Practices Wiki: