bwUniCluster : mpi/openmpi/2.1-gnu-7.1

Category/Name/Version: mpi/openmpi/2.1-gnu-7.1
Cluster:               bwUniCluster (available to ALL!)
Message/Overview:
Conflicts/Prereqs (= dependencies):
  conflict mpi/impi
  conflict mpi/mvapich
  conflict mpi/mvapich2
  conflict mpi/openmpi
License:
URL:
What-is:               mpi/openmpi/2.1-gnu-7.1: Open MPI bindings (mpicc, mpicxx, mpifort) version 2.1.3-gnu-7.1 for gnu/7.1
Help text:
This module provides the Open MPI Message Passing Interface bindings (mpicc,
mpicxx, mpifort, as well as mpif77 and mpif90). The corresponding compiler
suite module (gnu/7.1) should be loaded first and is loaded automatically.

Documentation:

  See the man pages of mpicc, mpicxx, mpifort (mpif77, mpif90) and mpirun,
  e.g. 'man mpicc'.
  For additional help see 'http://www.open-mpi.org/faq/'.

Compiling and executing MPI programs:

  Instead of the usual compiler commands, compile and link your MPI program
  with mpicc, mpicxx, or mpifort (mpif77 and mpif90). What a wrapper such as
  'mpicc' actually invokes can be displayed with the command 'mpicc -show'.

  Frequently the include path '-I${MPI_INC_DIR}' is needed in addition.

The MPI libraries can be found in

  $MPI_LIB_DIR

Example for compiling an MPI program with Open MPI:

  module load mpi/openmpi/2.1-gnu-7.1
  mpicc   -O3 mpi_application.c   -o mpi_application.exe # for C programs
  mpicxx  -O3 mpi_application.cxx -o mpi_application.exe # for C++ programs
  mpifort -O3 mpi_application.f   -o mpi_application.exe # for Fortran programs
  # GNU: Optimization level '-O3' results in substantially faster executables.
  For additional help see 'http://www.open-mpi.org/faq/?category=mpi-apps'.
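
As a point of reference, a minimal 'mpi_application.c' could look as follows.
This is an illustrative hello-world sketch, not part of the module itself:

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size;
      MPI_Init(&argc, &argv);                /* start the MPI runtime */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of this process */
      MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
      printf("Hello from rank %d of %d\n", rank, size);
      MPI_Finalize();                        /* shut down the MPI runtime */
      return 0;
  }

Compiled with the mpicc line above and started via 'mpirun -np 4', it prints
one line per rank (the ordering of the lines is not deterministic).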

Example for executing the MPI program using 4 cores on the local node:

  module load mpi/openmpi/2.1-gnu-7.1
  mpirun -np 4 `pwd`/mpi_application.exe

Example PBS snippet for submitting the program on 2 x 8 = 16 cores:

  #PBS -l nodes=2:ppn=8
  module load mpi/openmpi/2.1-gnu-7.1
  mpirun $PBS_O_WORKDIR/mpi_application.exe

The mpirun command automatically determines the number of workers and the
hosts they run on via PBS-TM and/or $PBS_NODEFILE.

For details on the library and include directories, run
    module show mpi/openmpi/2.1-gnu-7.1

The man pages, environment variables and compiler commands
are available after loading 'mpi/openmpi/2.1-gnu-7.1'.

In case of problems, please contact 'bwunicluster-hotline (at) lists.kit.edu'
or submit a trouble ticket at http://www.support.bwhpc-c5.de.

The full version is: 2.1.3-gnu-7.1
Support:            bwHPC Support-Portal
Installation date:  05.10.2017
Deletion date:
Best-practice wiki: