bwUniCluster : chem/orca/4.2.0

Category/Name/Version: chem/orca/4.2.0
Cluster: bwUniCluster (available to ALL users!)
Message/Overview:
Conflicts: conflict chem/orca
Prereqs (dependencies):
License: https://orcaforum.kofo.mpg.de/app.php/dlext/?cat=6
URL: https://orcaforum.kofo.mpg.de
What-is: chem/orca/4.2.0 : Quantum chemistry package ORCA (command '/opt/bwhpc/common/chem/orca/4.2.0/orca')
Help text
This module provides the quantum chemistry package ORCA version 4.2.0
via the command '/opt/bwhpc/common/chem/orca/4.2.0/orca'.

IMPORTANT: Read this license agreement: 'https://orcaforum.cec.mpg.de/license.html'
if you plan to use it for research.
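For a quick interactive test, usage might look like the following sketch (the input and output file names are illustrative, not taken from this page):

```shell
module load chem/orca/4.2.0     # makes the 'orca' command available
orca water.inp > water.out      # run ORCA on an input file, capturing output
```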

ORCA is a general-purpose quantum chemistry program package that features virtually
all modern electronic structure methods (density functional theory, many-body
perturbation and coupled cluster theories, and multireference and semiempirical methods).
It is designed with the aim of generality, extendibility, efficiency, and user 
friendliness. Its main field of application is larger molecules, transition metal
complexes, and their spectroscopic properties. ORCA uses standard Gaussian basis
functions and is fully parallelized. The article provides an overview of its current
possibilities and documents its efficiency. © 2011 John Wiley & Sons, Ltd.

MPI-PARALLELISM

To run an MPI job on 8 cores, simply add '! PAL8' at the beginning of your
ORCA input file. More generally, for up to 8 MPI processes, the 'PALn'
keyword in the input file specifies the number of MPI processes to use.
In the example below, 8 MPI processes are requested.

! BP86 def2-SVP Opt PAL8  
or - inside a *.moab submit script -
! BP86 def2-SVP Opt PAL${MOAB_PROCCOUNT}
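A complete minimal input file built around this keyword line might look as follows (the water geometry, charge 0, and multiplicity 1 are illustrative, not taken from this page):

```
! BP86 def2-SVP Opt PAL8
* xyz 0 1
O   0.000000   0.000000   0.000000
H   0.000000   0.757000   0.587000
H   0.000000  -0.757000   0.587000
*
```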

Only some ORCA binaries are parallelized; their names end with the suffix '_mpi'.

Even for these, ORCA scales well only up to 8 cores (on one node) in
most cases! See the documentation and the bwhpc-examples/*.moab
files ($ORCA_EXA_DIR) for details.
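Under MOAB, the pieces above can be combined into a submit script along these lines (a sketch only: the resource values, file names, and the use of 'which' to resolve the binary path are assumptions; see the bwhpc-examples for this cluster's actual templates):

```shell
#!/bin/bash
#MSUB -l nodes=1:ppn=8        # 8 cores on one node (ORCA rarely scales further)
#MSUB -l walltime=01:00:00

module load chem/orca/4.2.0   # provides the 'orca' command

# For parallel runs ORCA should be started via its full path,
# so resolve it from the loaded module:
ORCA_BIN=$(which orca)

# example.inp should request the same core count,
# e.g. via '! ... PAL${MOAB_PROCCOUNT}' in its keyword line.
$ORCA_BIN example.inp > example.out
```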


DOCUMENTATION

*  ORCA Forum (main source of information)
   https://orcaforum.kofo.mpg.de
 
*  Orca Manuals 
   https://orcaforum.kofo.mpg.de/app.php/dlext/?cat=1

*  ORCA Jump-Start Guide
   https://orcaforum.kofo.mpg.de/app.php/dlext/?view=detail&df_id=22
 
*  Local
   /opt/bwhpc/common/chem/orca/4.2.0/manual

*  bwHPC Examples for Justus Cluster
   /opt/bwhpc/common/chem/orca/4.2.0/bwhpc-examples

CITING

Neese, F. "The ORCA program system." Wiley Interdisciplinary
Reviews: Computational Molecular Science, 2012, Vol. 2, Issue 1, pages 73–78.

In case of problems, please contact 'bwhpc (at) uni-konstanz.de'.

This module is not mandatory.
Support: bwHPC Support-Portal
Installation date: 24.10.2019
Removal date:
Best-Practice-Wiki: https://www.bwhpc-c5.de/wiki/index.php/Orca