ORCA
Description
ORCA is a flexible, efficient, and easy-to-use general-purpose tool for quantum chemistry with specific emphasis on the spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods, ranging from semiempirical methods to DFT to single- and multi-reference correlated ab initio methods. It can also treat environmental and relativistic effects.
For more information, visit the ORCA Official Website.
Useful links: ORCA Input Library.
Access
Access to ORCA is granted to users who are able to show that they have registered with the ORCA Forum to download ORCA:
- Register for an ORCA Forum account.
- Provide the requested information and accept the End User License Agreement (EULA) containing the terms of registration.
- Once registration is complete, several download links will be available. You do not need to download anything to run ORCA on the HPRC systems.
- An ORCA registration verification email will be sent to the email address that you used to register for the ORCA Forum.
- Send a copy of the ORCA registration verification email to help@hprc.tamu.edu as proof of ORCA registration.
Once we have received your proof of ORCA registration, you will be given access to the HPRC ORCA installs and notified.
License Information
By using ORCA, you agree to the terms and conditions that you accepted when registering on the ORCA Forum.
End User License Agreement (EULA) for the ORCA software
Loading the ORCA modules
Grace and FASTER Instructions:
List the versions of ORCA installed:
mla ORCA
List the required module dependencies for ORCA/version:
ml spider ORCA/version
Finally, load ORCA/version with the dependencies listed first to set up your environment to run ORCA:
ml dependencies ORCA/version
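For example, using the version and dependencies that appear in the Grace job script below (run mla ORCA to see what is currently installed):
mla ORCA
ml spider ORCA/6.0.0-avx2
ml GCC/13.2.0 OpenMPI/4.1.6 ORCA/6.0.0-avx2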
Running ORCA in Parallel
ORCA takes care of communicating with the OpenMPI interface on its own when needed. Unlike many MPI programs, ORCA should NOT be started with mpirun (e.g. mpirun -np 16 orca). Instead, use the !PALX keyword in the input file to tell ORCA to start multiple processes. Everything from PAL2 to PAL8 and PAL16 is recognized. For example, to start a 4-process job, the input file might look like this:
! B3LYP def2-SVP Opt PAL4
or using block input to start a 48-core (48-process) job:
! B3LYP def2-SVP Opt
%pal
nprocs 48
end
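A complete minimal input file (orcaJob.inp) for such a job might look like the following sketch; the water geometry and the %maxcore line (ORCA's per-core memory limit, in MB) are illustrative additions, not requirements:
! B3LYP def2-SVP Opt
%pal
nprocs 48
end
%maxcore 3000
* xyz 0 1
O 0.000000 0.000000 0.000000
H 0.000000 0.757000 0.587000
H 0.000000 -0.757000 0.587000
*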
Grace Example Job file
A multicore (48-core) example: (Updated Oct. 31, 2024)
#!/bin/bash
##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=orcaJob # Sets the job name to orcaJob
#SBATCH --time=2:00:00 # Sets the runtime limit to 2 hr
#SBATCH --ntasks=48 # Request 48 tasks; the max per node on Grace is 48
#SBATCH --ntasks-per-node=48 # Requests 48 cores per node
#SBATCH --mem=360G # Request 360GB (up to 360GB on Grace) per node
#SBATCH --error=orcaJob.job.e%J # Sends stderr to orcaJob.job.e[jobID]
#SBATCH --output=orcaJob.job.o%J # Sends stdout to orcaJob.job.o[jobID]
# setup your environment to run ORCA
ml purge # purge all modules
ml GCC/13.2.0 OpenMPI/4.1.6 # load dependencies
ml ORCA/6.0.0-avx2 # load the module for ORCA
# run ORCA; for parallel runs the orca binary must be invoked with its full path
$EBROOTORCA/bin/orca orcaJob.inp > orcaJob.out
exit # exit when the job is done
To submit the job to the queue, use the following command:
[username@Grace ~]$ sbatch jobscript
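Once the job finishes, a quick way to verify that the calculation completed is to look for the final energy in the output file (the search string below is standard ORCA output; the file name comes from the job script above):
grep "FINAL SINGLE POINT ENERGY" orcaJob.out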
FASTER Example Job file
A multicore (64-core) example: (Updated Oct. 31, 2024)
#!/bin/bash
##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=orcaJob # Sets the job name to orcaJob
#SBATCH --time=2:00:00 # Sets the runtime limit to 2 hr
#SBATCH --ntasks=64 # Request 64 tasks; the max per node on FASTER is 64
#SBATCH --ntasks-per-node=64 # Requests 64 cores per node
#SBATCH --mem=240G # Request 240GB (up to 240GB on FASTER) per node
#SBATCH --error=orcaJob.job.e%J # Sends stderr to orcaJob.job.e[jobID]
#SBATCH --output=orcaJob.job.o%J # Sends stdout to orcaJob.job.o[jobID]
# setup your environment to run ORCA
ml purge # purge all modules
ml GCC/13.2.0 OpenMPI/4.1.6 # load dependencies
ml ORCA/6.0.0 # load the module for ORCA
# run ORCA; for parallel runs the orca binary must be invoked with its full path
$EBROOTORCA/bin/orca orcaJob.inp > orcaJob.out
exit # exit when the job is done
To submit the job to the queue, use the following command:
[username@faster1 ~]$ sbatch jobscript
For further instructions on how to create and submit a batch job, please see the appropriate kb page for each respective cluster:
- Grace: Batch Processing
- FASTER: Batch Processing
- ACES: Batch Processing
Using xTB with ORCA
ORCA 4.2.1 supports the semiempirical quantum mechanical methods GFNn-xTB with an IO-based interface to the xtb binary. The otool_xtb wrapper script has been added to the directory that contains the ORCA binaries.
To use ORCA with xTB, you will need to load both the ORCA and xtb modules.
Grace module load example with xtb:
ml GCC/8.3.0 OpenMPI/3.1.4 ORCA/4.2.1-shared xtb/6.2.3
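With both modules loaded, the GFN-xTB methods can be requested directly in the ORCA input file via the XTB1 (GFN1-xTB) and XTB2 (GFN2-xTB) keywords; a minimal sketch (the water geometry is illustrative):
! XTB2 Opt
* xyz 0 1
O 0.000000 0.000000 0.000000
H 0.000000 0.757000 0.587000
H 0.000000 -0.757000 0.587000
*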
Using CREST with xTB
The Conformer-Rotamer Ensemble Sampling Tool (CREST) is a utility/driver program for the xtb program. CREST version 2.9 has been installed on Grace and is available by loading the xtb module.
Grace module load example with xtb:
ml GCC/8.3.0 OpenMPI/3.1.4 xtb/6.2.3
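A minimal CREST conformer search might then look like the following sketch; the structure file name and thread count are illustrative (see the CREST documentation for the full option list):
crest struc.xyz -gfn2 -T 8 > crest.out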