
SW:Gaussian

From TAMU HPRC

Gaussian

Gaussian is restricted software.

Usage of this software is restricted to those determined eligible by the license manager: Lisa M. Perez, manager, Laboratory for Molecular Simulation. If you believe you are eligible to use Gaussian on our clusters, please email the HPRC Help Desk with your request and justification.

Description

Gaussian is a software package used for calculating molecular electronic structure and properties. Gaussian is used by chemists, chemical engineers, biochemists and physicists for research in established and emerging areas of chemical interest. This package includes a wide range of ab initio and semi-empirical methods for energy, gradient, frequency and property calculations.

Homepage: Gaussian.com
Manual: Gaussian 09 manual

Running G09 on Terra, Ada, and Curie

Those who have been specifically approved for access will be able to run Gaussian as detailed in the sections below.

Gaussian can only run in parallel with shared memory; therefore, you cannot use more than one node and are limited to a maximum of 28 cores on Terra, 20 cores on Ada, and 16 cores on Curie.
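Gaussian reads its shared-memory parallel settings either from Link 0 commands at the top of the input file or from a Default.Route file in the run directory; the example job files below use the Default.Route approach. For reference, the equivalent Link 0 lines at the top of an input file would look like this (shown for the 28-core Terra case):

```
%NProcShared=28
%Mem=50GB
```

If you set these in the input file, they override any defaults, so pick one mechanism and stay consistent with your job script's core and memory requests.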

Below are example job files for Gaussian 09 on Terra, Ada, and Curie. You can create your own job files to fit your needs, or you may use qprep, a script specifically set up to create and submit Gaussian 09 job files. Help for qprep can be obtained by running qprep -h. It is located at:

Terra: /sw/group/lms/bin/qprep
Ada and Curie: /software/lms/bin/qprep

Terra Example Job file

A multicore (28 core) example: (Last updated Oct. 18, 2017)

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE
#SBATCH --get-user-env=L

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=GaussianJob        # Sets the job name to GaussianJob
#SBATCH --time=2:00:00                # Sets the runtime limit to 2 hr
#SBATCH --ntasks=28                   # Requests 28 cores
#SBATCH --ntasks-per-node=28          # Requests 28 cores per node (1 node)
#SBATCH --mem=56G                     # Requests 56GB of memory per node
#SBATCH --error=GaussianJob.job.e%J   # Sends stderr to GaussianJob.job.e[jobID]
#SBATCH --output=GaussianJob.job.o%J  # Sends stdout to GaussianJob.job.o[jobID]
cd $TMPDIR # change to the local disk temporary directory
export g09root=/software/lms/g09_D01                 # set g09root variable
. $g09root/g09/bsd/g09.profile                       # source g09.profile to set up the environment for g09
echo -P- 28 > Default.Route                          # set the number of cores for Gaussian to use (28)
echo -M- 50GB >> Default.Route                       # set the memory for Gaussian to use (50GB)
module purge                                         # purge all modules
g09 < $SLURM_SUBMIT_DIR/GaussianJob.com > $SLURM_SUBMIT_DIR/GaussianJob.log # run gaussian
exit #exit when the job is done
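The echo commands in the script above build the Default.Route file that Gaussian reads from the current directory at startup: -P- sets the number of shared-memory cores and -M- sets the dynamic memory limit. Run on their own, they produce a two-line file:

```shell
# Reproduce the Default.Route file the Terra job script writes
echo "-P- 28" > Default.Route     # -P- directive: number of shared-memory cores
echo "-M- 50GB" >> Default.Route  # -M- directive: dynamic memory limit
cat Default.Route                 # prints: -P- 28 / -M- 50GB on two lines
```

Note that the memory given to Gaussian (50GB) is deliberately a little below the 56GB requested from the scheduler, leaving headroom for the process overhead.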

To submit the job to the queue, use the following command:

[NetID@terra1 ~]$ sbatch jobscript

where jobscript is the name of the job file.
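The job script reads its Gaussian input from GaussianJob.com in the directory the job was submitted from. As an illustration only, a minimal input file might look like the following (a single-point HF/3-21G calculation on water; the method, basis set, and geometry are placeholders, not recommendations). No Link 0 lines are needed here because the job script's Default.Route file already sets the core count and memory:

```
# HF/3-21G

Water single point

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200

```

Gaussian requires a blank line after the geometry, so keep the trailing blank line when you create the file.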

Ada Example Job file

A multicore (20 core) example: (Last updated Oct. 18, 2017)

#BSUB -J GaussianJob                   # sets the job name to GaussianJob.
#BSUB -L /bin/bash                     # uses the bash login shell to initialize the job's execution environment.
#BSUB -W 2:00                          # sets to 2 hours the job's runtime wall-clock limit.
#BSUB -n 20                            # assigns 20 cores for execution.
#BSUB -R "span[ptile=20]"              # assigns 20 cores per node.
#BSUB -R "rusage[mem=2700]"            # reserves 2700MB per process/CPU for the job (2700MB * 20 Core = 54GB per node) 
#BSUB -M 2700		                # sets the per-process enforceable memory limit to 2700MB.
#BSUB -o GaussianJob.job.o%J           # directs the job's standard output to GaussianJob.job.o[jobid]
#BSUB -e GaussianJob.job.e%J           # directs the job's standard error to GaussianJob.job.e[jobid]
cd $TMPDIR # change to the local disk temporary directory
export g09root=/software/lms/g09_D01                 # set g09root variable
. $g09root/g09/bsd/g09.profile                       # source g09.profile to set up the environment for g09
echo -P- 20 > Default.Route                          # set the number of cores for Gaussian to use (20)
echo -M- 50GB >> Default.Route                       # set the memory for Gaussian to use (50GB)
module purge                                         # purge all modules
g09 < $LS_SUBCWD/GaussianJob.com > $LS_SUBCWD/GaussianJob.log # run gaussian
exit #exit when the job is done

To submit the job to the queue, use the following command:

[NetID@ada1 ~]$ bsub < jobscript

where jobscript is the name of the job file.

Curie Example Job file

A multicore (16 core) example: (Last updated Oct. 18, 2017)

#BSUB -J GaussianJob                   # sets the job name to GaussianJob.
#BSUB -L /bin/bash                     # uses the bash login shell to initialize the job's execution environment.
#BSUB -W 2:00                          # sets to 2 hours the job's runtime wall-clock limit.
#BSUB -n 16                            # assigns 16 cores for execution.
#BSUB -R "span[ptile=16]"              # assigns 16 cores per node.
#BSUB -R "rusage[mem=3500]"            # reserves 3500MB per process/CPU for the job (3500MB * 16 Core = 56GB per node) 
#BSUB -M 3500		                # sets the per-process enforceable memory limit to 3500MB.
#BSUB -o GaussianJob.job.o%J           # directs the job's standard output to GaussianJob.job.o[jobid]
#BSUB -e GaussianJob.job.e%J           # directs the job's standard error to GaussianJob.job.e[jobid]
cd $TMPDIR # change to the local disk temporary directory
export g09root=/software/lms/g09_D01                 # set g09root variable
. $g09root/g09/bsd/g09.profile                       # source g09.profile to set up the environment for g09
echo -P- 16 > Default.Route                          # set the number of cores for Gaussian to use (16)
echo -M- 50GB >> Default.Route                       # set the memory for Gaussian to use (50GB)
module purge                                         # purge all modules
g09 < $LS_SUBCWD/GaussianJob.com > $LS_SUBCWD/GaussianJob.log # run gaussian
exit #exit when the job is done

To submit the job to the queue, use the following command:

[NetID@curie1 ~]$ bsub < jobscript

where jobscript is the name of the job file.

Frequently Asked Questions