
ORCA

Introduction

ORCA is a flexible, efficient, and easy-to-use general-purpose tool for quantum chemistry with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods ranging from semiempirical methods to DFT to single- and multireference correlated ab initio methods. It can also treat environmental and relativistic effects.

Installed Versions

For the current list of installed versions, use:

$ ml av orca
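
To use a specific version, load its module; for example, the version used in the job scripts below:

$ ml ORCA/6.0.0-gompi-2023a-avx2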

Serial Computation With ORCA

You can test a serial computation with this simple input file: a single-point Hartree-Fock calculation in the SVP basis on a carbon monoxide molecule (charge 0, singlet). Create a file called orca_serial.inp and paste the following ORCA input into it:

    ! HF SVP
    * xyz 0 1
      C 0 0 0
      O 0 0 1.13
    *

Next, create a Slurm submission file for the Karolina cluster (an interactive job can be used, too):

#!/bin/bash
#SBATCH --job-name=ORCA_SERIAL
#SBATCH --nodes=1
#SBATCH --partition=qcpu_exp
#SBATCH --time=1:00:00
#SBATCH --account=OPEN-0-0

ml ORCA/6.0.0-gompi-2023a-avx2
orca orca_serial.inp
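
With Slurm's defaults, the job's output would appear in slurm-<jobid>.out. The listing below shows log files named after the job instead; a minimal sketch of the extra directives that would produce those names (an assumption, not part of the original script):

#SBATCH --output=ORCA_SERIAL.o%j
#SBATCH --error=ORCA_SERIAL.e%j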

Submit the job to the queue. After the job ends, you can find an output log in your working directory:

$ sbatch submit_serial.slurm
Submitted batch job 1417552

$ ll ORCA_SERIAL.*
-rw------- 1 user user     0 Aug 21 12:24 ORCA_SERIAL.e1417552
-rw------- 1 user user 20715 Aug 21 12:25 ORCA_SERIAL.o1417552

$ cat ORCA_SERIAL.o1417552

                                 *****************
                                 * O   R   C   A *
                                 *****************

                #########################################################
                #                        -***-                          #
                #          Department of theory and spectroscopy        #
                #                                                       #
                #                      Frank Neese                      #
                #                                                       #
                #     Directorship, Architecture, Infrastructure        #
                #                    SHARK, DRIVERS                     #
                #        Core code/Algorithms in most modules           #
                #                                                       #
                #        Max Planck Institute fuer Kohlenforschung      #
                #                Kaiser Wilhelm Platz 1                 #
                #                 D-45470 Muelheim/Ruhr                 #
                #                      Germany                          #
                #                                                       #
                #                  All rights reserved                  #
                #                        -***-                          #
                #########################################################


                         Program Version 6.0.0  -   RELEASE  -

...

                             ****ORCA TERMINATED NORMALLY****
TOTAL RUN TIME: 0 days 0 hours 0 minutes 0 seconds 980 msec

Running ORCA in Parallel

Your serial computation can easily be converted to a parallel one. Simply specify the number of parallel processes via the %pal directive. In this example, 1 node with 16 cores is used.

Warning

Do not request the cores with the ! PALn keyword, since only PAL2 through PAL8 are recognized; use the %pal block instead.

    ! HF SVP
    %pal
      nprocs 16
    end
    * xyz 0 1
      C 0 0 0
      O 0 0 1.13
    *

You also need to edit the previously used Slurm submission file to specify the number of nodes, cores, and MPI processes to run:

#!/bin/bash
#SBATCH --job-name=ORCA_PARALLEL
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --partition=qcpu_exp
#SBATCH --account=OPEN-0-0
#SBATCH --time=1:00:00

ml ORCA/6.0.0-gompi-2023a-avx2
$(which orca) orca_parallel.inp > output.out
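
If you change the allocation, the nprocs value in the input file has to be kept in sync with --ntasks-per-node by hand. One way around this, sketched below under the assumption that the whole input is written by the job script, is to generate the %pal block at run time from the standard Slurm variable SLURM_NTASKS:

# Sketch: replace the last line of the submit script with the following,
# assuming the full input is generated here rather than kept in a separate file.
cat > orca_parallel.inp <<EOF
! HF SVP
%pal
  nprocs ${SLURM_NTASKS}
end
* xyz 0 1
  C 0 0 0
  O 0 0 1.13
*
EOF

$(which orca) orca_parallel.inp > output.out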

Note

When running ORCA in parallel, do NOT start it with mpirun (e.g. mpirun -np 4 orca) as you would many MPI programs; it has to be called with its full pathname.

Submit this job to the queue and see the output file.

$ sbatch submit_parallel.slurm
Submitted batch job 2127305

$ cat output.out

                                 *****************
                                 * O   R   C   A *
                                 *****************


                #########################################################
                #                        -***-                          #
                #          Department of theory and spectroscopy        #
                #                                                       #
                #                      Frank Neese                      #
                #                                                       #
                #     Directorship, Architecture, Infrastructure        #
                #                    SHARK, DRIVERS                     #
                #        Core code/Algorithms in most modules           #
                #                                                       #
                #        Max Planck Institute fuer Kohlenforschung      #
                #                Kaiser Wilhelm Platz 1                 #
                #                 D-45470 Muelheim/Ruhr                 #
                #                      Germany                          #
                #                                                       #
                #                  All rights reserved                  #
                #                        -***-                          #
                #########################################################


                         Program Version 6.0.0  -   RELEASE  -
...

           ************************************************************
           *        Program running with 16 parallel MPI-processes    *
           *              working on a common directory               *
           ************************************************************

...
                             ****ORCA TERMINATED NORMALLY****
TOTAL RUN TIME: 0 days 0 hours 0 minutes 17 seconds 62 msec

You can see that the program ran with 16 parallel MPI processes. In version 6.0.0, only the following modules are parallelized:

  • AUTOCI
  • CASSCF / NEVPT2 / CASSCFRESP
  • CIPSI
  • CIS/TDDFT
  • GRAD (general Gradient program)
  • GUESS
  • LEANSCF (memory conserving SCF solver)
  • MCRPA
  • MDCI (Canonical- and DLPNO-Methods)
  • MM
  • MP2 and RI-MP2 (including Gradients)
  • MRCI
  • PC
  • PLOT
  • PNMR
  • POP
  • PROP
  • PROPINT
  • REL
  • ROCIS
  • SCFGRAD
  • SCFRESP (with SCFHessian)
  • STARTUP
  • VPOT
  • Numerical Gradients, Frequencies, Overtones-and-Combination-Bands
  • VPT2
  • NEB (Nudged Elastic Band)

Register as a User

You are encouraged to register as an ORCA user in order to take advantage of updates, announcements, and the users forum.

Documentation

A comprehensive manual is available online for registered users.