ORCA

Description

ORCA is a flexible, efficient, and easy-to-use general-purpose tool for quantum chemistry, with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods, ranging from semiempirical methods to DFT to single- and multi-reference correlated ab initio methods. It can also treat environmental and relativistic effects.

For more information, visit the ORCA Official Website.
Useful links: ORCA Input Library.

Access

Access to ORCA is granted to users who are able to show that they have registered with the ORCA Forum to download ORCA:

  1. Register for an ORCA Forum account.
  2. Provide the requested information and accept the End User License Agreement (EULA) containing the terms of registration.
  3. Once registration is complete, several download links will be available. You do not need to download anything to run ORCA on the HPRC systems.
  4. An ORCA registration verification email will be sent to the email address that you used to register for the ORCA Forum.
  5. Send a copy of the ORCA registration verification email (NOT the EULA) to help@hprc.tamu.edu as proof of ORCA registration.

Once we have received your proof of ORCA registration, you will be given access to the HPRC ORCA installs and notified.

License Information

By using ORCA, you agree to the terms and conditions you accepted when registering with the ORCA Forum.

End User License Agreement (EULA) for the ORCA software

Usage

Loading the ORCA modules

List the versions of ORCA installed:

mla ORCA

List the required module dependencies for ORCA/version:

ml spider ORCA/version

Finally, load ORCA/version, with its dependencies listed first, to set up your environment to run ORCA:

ml dependencies ORCA/version
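As a concrete example, a session using the versions that appear in the job scripts on this page might look like the following; substitute the versions reported by `mla ORCA` on your cluster:

```shell
# Example module session for ORCA 6.1.1 (versions taken from the job
# scripts on this page; check `mla ORCA` for what is actually installed)
mla ORCA                        # list installed ORCA versions
ml spider ORCA/6.1.1-avx2       # show the required dependencies
ml GCC/13.2.0 OpenMPI/4.1.8     # load the dependencies first
ml ORCA/6.1.1-avx2              # then load ORCA itself
```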

Running ORCA in Parallel

ORCA takes care of communicating with the OpenMPI interface on its own when needed. Unlike many MPI programs, ORCA should NOT be started with mpirun (e.g., mpirun -np 16 orca). Instead, use the !PALX keyword in the input file to tell ORCA to start multiple processes. Everything from PAL2 through PAL8, plus PAL16, is recognized. For example, to start a 4-process job, the input file might look like this:

! B3LYP def2-SVP Opt PAL4

or using block input to start a 48-core (48-processor) job:

! B3LYP def2-SVP Opt  
%pal  
nprocs 48
end
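The process count in the %pal block must match the number of tasks requested from Slurm. One way to keep the two in sync is a small job-script fragment like the sketch below (not part of the official docs; the file name pal.inc is a placeholder), which generates the %pal block from the Slurm allocation:

```shell
# A minimal sketch: derive the %pal block from the Slurm allocation so
# nprocs always matches the --ntasks value in the job script.
# SLURM_NTASKS is set by Slurm inside a job; default to 1 outside one.
nprocs=${SLURM_NTASKS:-1}
cat > pal.inc <<EOF
%pal
nprocs ${nprocs}
end
EOF
cat pal.inc       # this block can then be pasted into the ORCA input file
```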

ORCA Example Job file

A single node multicore (48-core) example: (Updated Dec. 8, 2025)

#!/bin/bash  
##NECESSARY JOB SPECIFICATIONS  
#SBATCH --job-name=orcaJob            # Sets the job name to orcaJob  
#SBATCH --time=2:00:00                # Sets the runtime limit to 2 hr  
#SBATCH --ntasks=48                   # Request 48 tasks
#SBATCH --ntasks-per-node=48          # Requests 48 cores per node 
#SBATCH --mem=360G                    # Request 360GB per node
#SBATCH --error=orcaJob.job.e%J       # Sends stderr to orcaJob.job.e[jobID]  
#SBATCH --output=orcaJob.job.o%J      # Sends stdout to orcaJob.job.o[jobID]

# setup your environment to run ORCA 
ml purge                              # purge all modules  
ml GCC/13.2.0 OpenMPI/4.1.8           # load dependencies
ml ORCA/6.1.1-avx2                    # load the module for ORCA

# run ORCA
$EBROOTORCA/orca orcaJob.inp  >  orcaJob.out

exit                                  # exit when the job is done

A two-node multicore (48-core) example: (Updated Dec. 8, 2025)

#!/bin/bash  
##NECESSARY JOB SPECIFICATIONS  
#SBATCH --job-name=orcaJob            # Sets the job name to orcaJob  
#SBATCH --time=2:00:00                # Sets the runtime limit to 2 hr  
#SBATCH --ntasks=48                   # Request 48 tasks
#SBATCH --ntasks-per-node=24          # Requests 24 cores per node 
#SBATCH --nodes=2
#SBATCH --mem=360G                    # Request 360GB per node
#SBATCH --error=orcaJob.job.e%J       # Sends stderr to orcaJob.job.e[jobID]  
#SBATCH --output=orcaJob.job.o%J      # Sends stdout to orcaJob.job.o[jobID]

# setup your environment to run ORCA 
ml purge                              # purge all modules  
ml GCC/13.2.0 OpenMPI/4.1.8           # load dependencies
ml ORCA/6.1.1-avx2                    # load the module for ORCA

# run ORCA
$EBROOTORCA/orca orcaJob.inp  >  orcaJob.out

exit                                  # exit when the job is done

Submitting Job

To submit the job to the queue, use the following command:

sbatch jobscript

For further instructions on how to create and submit a batch job, please see the batch-job knowledge base page for your cluster.
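A typical submission session might look like the following (the job-script name orcaJob.slurm is a placeholder; job IDs will differ):

```shell
# Submit the ORCA job script and monitor it; sbatch prints
# "Submitted batch job <jobID>" on success.
sbatch orcaJob.slurm        # submit the job to the queue
squeue -u $USER             # check the status of your queued jobs
```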

Using xTB with ORCA

ORCA 6.0.1 distributions contain the official xtb 6.7.1 binaries for the semiempirical quantum mechanical methods GFNn-xTB. There’s no need to load the xtb module separately.
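For example, a GFN2-xTB calculation is requested with the XTB2 simple-input keyword. The sketch below uses a water molecule as a placeholder geometry:

```
! XTB2 Opt
* xyz 0 1
O   0.000000   0.000000   0.000000
H   0.000000   0.757000   0.586000
H   0.000000  -0.757000   0.586000
*
```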

Performance

ORCA scales best when using up to 16 cores. For the most efficient use of your SUs, we recommend running ORCA with no more than 16 cores.

Below is an example of ORCA performance:

  • molecule: 168 atoms
  • method: B3LYP/Def2-TZVP
  • energy gradient
  • single node