bwUniCluster : mpi/impi/2018-gnu-5.2

Category/Name/Version: mpi/impi/2018-gnu-5.2
Cluster: bwUniCluster (available to ALL!)
Message/Overview:
Conflicts / Prereqs (= dependencies):
prereq   compiler/gnu
conflict mpi/impi
conflict mpi/mvapich
conflict mpi/mvapich2
conflict mpi/openmpi
License: Commercial (install-doc/EULA.txt)
URL:
What-is: mpi/impi/2018-gnu-5.2: Intel MPI bindings (mpicc mpicxx mpif77 mpif90) version 2018.4.274 for GNU compiler suite 5.2
Help text:
This module provides the Intel Message Passing Interface (mpicc, mpicxx, mpif77
and mpif90) version '2018.4.274' for the GNU compiler suite (gcc, g++ and
gfortran) (see also 'http://software.intel.com/en-us/intel-mpi-library').

The corresponding GNU compiler module (version 5.2) is loaded automatically.
Other GNU compiler modules as well as the system GNU compiler should work too.
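
For example, to check this after loading the module (a minimal sketch; the
exact 'module list' output depends on your environment):

  module load mpi/impi/2018-gnu-5.2   # also loads the matching compiler/gnu module
  module list                         # verify that both modules are loaded
  mpicc -show                         # display the gcc command line behind the wrapper
  gcc --version                       # the GNU compiler the wrappers will use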

Local documentation:

  See 'mpicc -help', 'mpicxx -help', 'mpif77 -help', 'mpif90 -help',
  'mpirun -help', 'mpiexec -help', 'mpiexec.hydra --help' and all man pages in
    $MPI_MAN_DIR = /opt/bwhpc/common/compiler/intel/compxe.2018.5.274/impi/2018.4.274/man
  as well as 'User_Guide.pdf', 'Reference_Manual.pdf' and 'get_started.htm' in
    $MPI_DOC_DIR = /opt/bwhpc/common/compiler/intel/compxe.2018.5.274/documentation_2018/en/mpi
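
For example, to browse this documentation (a sketch; it assumes the module
sets $MPI_DOC_DIR and $MPI_MAN_DIR as listed above):

  module load mpi/impi/2018-gnu-5.2
  ls $MPI_DOC_DIR                     # User_Guide.pdf, Reference_Manual.pdf, ...
  man -M $MPI_MAN_DIR mpirun          # read the mpirun man page shipped with this module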

Compiling and executing MPI-programs:

  Instead of the usual compiler commands, compile and link your MPI program
  with mpicc, mpicxx, mpif77 or mpif90. To see what a wrapper such as 'mpif90'
  actually invokes, run 'mpif90 -show'.

  Sometimes one might need '-I${MPI_INC_DIR}' in addition.
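
A short sketch of both points (my_prog.c is a hypothetical source file):

  mpif90 -show                        # print the real compile/link command of the wrapper
  mpicc -c my_prog.c -I${MPI_INC_DIR} # explicit include path, only needed in some setups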

The MPI-libraries can be found in

  $MPI_LIB_DIR = /opt/bwhpc/common/compiler/intel/compxe.2018.5.274/impi/2018.4.274/intel64/lib
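
To see what is available there, e.g. (the library names below are an
assumption and may differ between Intel MPI versions):

  ls $MPI_LIB_DIR                     # e.g. libmpi.so, libmpifort.so, ...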

Example for compiling an MPI program with Intel MPI:

  module load mpi/impi/2018-gnu-5.2
  cp -v $MPI_EXA_DIR/{pi3.f,pi3.c,pi3.cxx} ./
  mpicc  pi3.c   -o pi3.exe # for C programs
  mpicxx pi3.cxx -o pi3.exe # for C++ programs
  mpif90 pi3.f   -o pi3.exe # for Fortran programs
  # Intel: Default optimization level (-O2) results in quite good performance.
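
To verify that the resulting binary was linked against this Intel MPI
installation, you can, for example, inspect its shared-library dependencies
(a sketch; the exact library names may differ):

  ldd pi3.exe | grep -i mpi           # paths should point into $MPI_LIB_DIR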

Example for executing the MPI program using 4 cores on the local node:

  module load mpi/impi/2018-gnu-5.2
  mpirun -n 4 `pwd`/pi3.exe
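
For an interactive run spanning several nodes, Intel MPI's Hydra launcher can,
for example, take a host file and a per-node process count (hosts.txt is a
hypothetical file with one hostname per line):

  mpirun -f hosts.txt -ppn 4 -n 8 `pwd`/pi3.exe   # 8 processes, at most 4 per node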

Example MOAB snippet for submitting the program on 2 x 16 = 32 cores:

  #MSUB -l nodes=2:ppn=16
  module load mpi/impi/2018-gnu-5.2
  mpirun -n $MOAB_PROCCOUNT $MOAB_SUBMITDIR/pi3.exe
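
A more complete job script built around this snippet might look as follows
(job name and walltime are placeholders; adjust them to your needs):

  #!/bin/bash
  #MSUB -N pi3                        # placeholder job name
  #MSUB -l nodes=2:ppn=16
  #MSUB -l walltime=00:10:00          # placeholder walltime
  module load mpi/impi/2018-gnu-5.2
  mpirun -n $MOAB_PROCCOUNT $MOAB_SUBMITDIR/pi3.exe

Such a script is submitted with 'msub <scriptname>'.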

The mpirun command automatically executes the three required commands mpdboot,
mpiexec and mpdallexit. It automatically boots the mpd daemons on the nodes as
defined by $SLURM_NODELIST or $PBS_NODEFILE. For more control (e.g. distribution
of workers), the commands mpdboot, mpiexec and mpdallexit can be called instead.
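
Alternatively, basic control over worker placement is also possible directly
via the mpirun options of Intel MPI's Hydra launcher (a sketch, not the mpd
commands described above):

  mpirun -ppn 8 -n 16 ./pi3.exe       # 16 processes, at most 8 per node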

For details on how to use Intel MPI please read
  $MPI_DOC_DIR/User_Guide.pdf and $MPI_DOC_DIR/get_started.htm

For details on library and include dirs please call
    module show mpi/impi/2018-gnu-5.2
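
For example, to list only the directory variables set by this module (the
informational output of 'module show' may go to stderr, hence the redirection):

  module show mpi/impi/2018-gnu-5.2 2>&1 | grep _DIR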

Troubleshooting:
* If a job with many MPI workers (e.g. 32) seems to hang during initialization,
  please try a different combination of GNU compiler and impi module versions.
* If the STDOUT of your MPI GNU Fortran program is displayed with a delay,
  you can disable the stdout buffering of the program, e.g. via
    export GFORTRAN_UNBUFFERED_ALL=1

The environment variables and commands are available after loading this module.
In case of problems, please contact 'bwunicluster-hotline (at) lists.kit.edu'.
Support: bwHPC Support-Portal
Installation date: 15.12.2017
Deletion date:
Best-Practice-Wiki: