ORCA

Introduction

ORCA is a flexible, efficient, and easy-to-use general-purpose tool for quantum chemistry with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods ranging from semiempirical methods to DFT to single- and multireference correlated ab initio methods. It can also treat environmental and relativistic effects.

Installed Versions

For the current list of installed versions, use:

$ ml av orca
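To load a specific version from that listing (available versions change over time; the module below is the one used throughout this section):

$ ml ORCA/5.0.1-OpenMPI-4.1.1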

Serial Computation With ORCA

You can test a serial computation with this simple input file. Create a file called orca_serial.inp containing the following ORCA input:

    ! HF SVP
    * xyz 0 1
      C 0 0 0
      O 0 0 1.13
    *
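For orientation, here is the same input annotated with comments (ORCA ignores everything after a # character):

    ! HF SVP          # method (Hartree-Fock) and basis set (SVP)
    * xyz 0 1         # Cartesian coordinate block; charge 0, spin multiplicity 1
      C 0 0 0
      O 0 0 1.13      # C-O distance in Angstrom
    *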

Next, create a Slurm submission file for the Karolina cluster (an interactive job can be used, too):

#!/bin/bash
#SBATCH --job-name=ORCA_SERIAL
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --partition=qexp
#SBATCH --account=OPEN-0-0

ml ORCA/5.0.1-OpenMPI-4.1.1
srun orca orca_serial.inp

Submit the job to the queue. After the job ends, you can find an output log in your working directory:

$ sbatch submit_serial.slurm
Submitted batch job 1417552

$ ll ORCA_SERIAL.*
-rw------- 1 user user     0 Aug 21 12:24 ORCA_SERIAL.e1417552
-rw------- 1 user user 20715 Aug 21 12:25 ORCA_SERIAL.o1417552

$ cat ORCA_SERIAL.o1417552

                                 *****************
                                 * O   R   C   A *
                                 *****************

           --- An Ab Initio, DFT and Semiempirical electronic structure package ---

                  #######################################################
                  #                        -***-                        #
                  #  Department of molecular theory and spectroscopy    #
                  #              Directorship: Frank Neese              #
                  # Max Planck Institute for Chemical Energy Conversion #
                  #                  D-45470 Muelheim/Ruhr              #
                  #                       Germany                       #
                  #                                                     #
                  #                  All rights reserved                #
                  #                        -***-                        #
                  #######################################################


                         Program Version 5.0.1 - RELEASE -

...

                             ****ORCA TERMINATED NORMALLY****
TOTAL RUN TIME: 0 days 0 hours 0 minutes 1 seconds 47 msec
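A quick way to verify that a run finished cleanly is to search the log for the termination banner:

$ grep "TERMINATED NORMALLY" ORCA_SERIAL.o1417552
                             ****ORCA TERMINATED NORMALLY****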

Running ORCA in Parallel

Your serial computation can easily be converted to a parallel one. Simply specify the number of parallel processes via the %pal directive. In this example, 4 nodes with 128 cores each are used (4 × 128 = 512 MPI processes).

Warning

Do not use the ! PAL directive, as only PAL2 to PAL8 are recognized.

    ! HF SVP
    %pal
      nprocs 512 # 4 nodes, 128 cores each
    end
    * xyz 0 1
      C 0 0 0
      O 0 0 1.13
    *

You also need to edit the previously used Slurm submission file, specifying the number of nodes, cores, and MPI processes to run:

#!/bin/bash
#SBATCH --job-name=ORCA_PARALLEL
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=128
#SBATCH --partition=qexp
#SBATCH --account=OPEN-0-0

ml ORCA/5.0.1-OpenMPI-4.1.1
$(which orca) orca_parallel.inp > output.out
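The nprocs value must equal the total number of MPI ranks Slurm allocates, i.e. nodes × ntasks-per-node, which is available inside the job script as $SLURM_NTASKS. Rather than hard-coding it, you can patch the input from the job script; a minimal sketch, assuming the input file already contains an nprocs line:

# $SLURM_NTASKS = nodes x ntasks-per-node (4 x 128 = 512 here)
sed -i "s/nprocs .*/nprocs ${SLURM_NTASKS}/" orca_parallel.inp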

Note

Unlike many MPI programs, ORCA should NOT be started with mpirun (e.g. mpirun -np 4 orca, etc.); it has to be called with a full pathname. ORCA then starts the required MPI processes itself, according to the %pal settings.
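For illustration, using the files from this example:

# wrong - launches many independent serial ORCA processes
mpirun -np 512 orca orca_parallel.inp

# right - call the binary by its full path; ORCA spawns its own MPI ranks
$(which orca) orca_parallel.inp > output.out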

Submit this job to the queue and see the output file.

$ sbatch submit_parallel.slurm
Submitted batch job 1417598

$ ll ORCA_PARALLEL.*
-rw-------  1 user user     0 Aug 21 13:12 ORCA_PARALLEL.e1417598
-rw-------  1 user user 23561 Aug 21 13:13 ORCA_PARALLEL.o1417598

$ cat ORCA_PARALLEL.o1417598

                                 *****************
                                 * O   R   C   A *
                                 *****************

           --- An Ab Initio, DFT and Semiempirical electronic structure package ---

                  #######################################################
                  #                        -***-                        #
                  #  Department of molecular theory and spectroscopy    #
                  #              Directorship: Frank Neese              #
                  # Max Planck Institute for Chemical Energy Conversion #
                  #                  D-45470 Muelheim/Ruhr              #
                  #                       Germany                       #
                  #                                                     #
                  #                  All rights reserved                #
                  #                        -***-                        #
                  #######################################################


                         Program Version 5.0.1 - RELEASE -
...

           ************************************************************
           *       Program running with 512 parallel MPI-processes    *
           *              working on a common directory               *
           ************************************************************

...
                             ****ORCA TERMINATED NORMALLY****
TOTAL RUN TIME: 0 days 0 hours 0 minutes 11 seconds 859 msec

You can see that the program ran with 512 parallel MPI processes. In version 5.0.1, only the following modules are parallelized:

  • ANOINT
  • CASSCF / NEVPT2
  • CIPSI
  • CIS/TDDFT
  • CPSCF
  • EPRNMR
  • GTOINT
  • MDCI (Canonical-, PNO-, DLPNO-Methods)
  • MP2 and RI-MP2 (including Gradient and Hessian)
  • MRCI
  • PC
  • ROCIS
  • SCF
  • SCFGRAD
  • SCFHESS
  • SOC
  • Numerical Gradients and Frequencies

Example Submission Script

The following script contains all of the necessary instructions to run an ORCA job, including copying the files to and from /scratch to utilize the InfiniBand network:

#!/bin/bash
#SBATCH --account=OPEN-00-00
#SBATCH --job-name=example-CO
#SBATCH --partition=qexp
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --time=00:05:00

ml purge
ml ORCA/5.0.1-OpenMPI-4.1.1

echo $SLURM_SUBMIT_DIR
cd $SLURM_SUBMIT_DIR

# create /scratch dir
b=$(basename $SLURM_SUBMIT_DIR)
SCRDIR=/scratch/project/OPEN-00-00/$USER/${b}_${SLURM_JOBID}/
echo $SCRDIR
mkdir -p $SCRDIR
cd $SCRDIR || exit

# get number of cores allocated to this job
ncpus=$(sacct -j $SLURM_JOBID --format=AllocCPUS --noheader | head -1)


### create ORCA input file
cat > ${SLURM_JOB_NAME}.inp <<EOF
! HF def2-TZVP
%pal
  nprocs $ncpus
end
* xyz 0 1
C 0.0 0.0 0.0
O 0.0 0.0 1.13
*
EOF
###

# copy input files to /scratch
cp -r $SLURM_SUBMIT_DIR/* .

# run calculations
$(which orca) ${SLURM_JOB_NAME}.inp > $SLURM_SUBMIT_DIR/${SLURM_JOB_NAME}.out

# copy output files to home, delete the rest
cp * $SLURM_SUBMIT_DIR/ && cd $SLURM_SUBMIT_DIR
rm -rf $SCRDIR
exit
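Save the script (e.g. as submit_example.slurm; the file name is arbitrary) and submit it the same way as the previous jobs:

$ sbatch submit_example.slurm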

Register as User

You are encouraged to register as a user of ORCA in order to take advantage of updates, announcements, and the users forum.

Documentation

A comprehensive manual is available online for registered users.