From TAMU HPRC
Revision as of 13:27, 18 October 2017

Gaussian

Gaussian is restricted software.

Use of this software is restricted to those determined eligible by the license manager. If you believe you are eligible to use Gaussian on our clusters, please email the HPRC Help Desk with your request and justification.

Description

Gaussian 16 is the latest version of the Gaussian series of electronic structure programs, used by chemists, chemical engineers, biochemists, physicists, and other scientists worldwide. Homepage: Gaussian.com

Access

Gaussian is restricted software. Those who have been specifically approved for access will be able to access Gaussian as detailed in the sections below.

Loading the Module

Is there a Gaussian module? Do I load by software path?

License Tokens

To see the number of available tokens, use the License Checker Tool.

Ada Example

A single-core example, with a user subroutine: (Last updated Dec. 9, 2016)

#BSUB -J GaussianJob         # sets the job name to GaussianJob.
#BSUB -L /bin/bash           # uses the bash login shell to initialize the job's execution environment.
#BSUB -W 2:00                # sets to 2 hours the job's runtime wall-clock limit.
#BSUB -n 1                   # assigns 1 core for execution.
#BSUB -R "span[ptile=1]"     # assigns 1 core per node.
#BSUB -R "rusage[mem=5000]"  # reserves 5000MB per process/CPU for the job (5GB * 1 Core = 5GB per node)
#BSUB -M 5000                # sets to 5,000MB (~5GB) the per-process enforceable memory limit.
#BSUB -o GaussianOut.%J      # directs the job's standard output to GaussianOut.jobid


## Load the modules
module load intel/2016b
module load ABAQUS/6.14.2-linux-x86_64

## Launch Abaqus with proper parameters 
abaqus memory="5GB" cpus=1 job=jobname input=inputfile.inp user=filename.for


A multi-core (10-core) example, with no user subroutine: (Last updated Dec. 9, 2016)

#BSUB -J GaussianJob         # sets the job name to GaussianJob.
#BSUB -L /bin/bash           # uses the bash login shell to initialize the job's execution environment.
#BSUB -W 2:00                # sets to 2 hours the job's runtime wall-clock limit.
#BSUB -n 10                  # assigns 10 cores for execution.
#BSUB -R "span[ptile=10]"    # assigns 10 cores per node.
#BSUB -R "rusage[mem=5000]"  # reserves 5000MB per process/CPU for the job (5GB * 10 Cores = 50GB per node)
#BSUB -M 5000                # sets to 5,000MB (~5GB) the per-process enforceable memory limit.
#BSUB -o GaussianOut.%J      # directs the job's standard output to GaussianOut.jobid


## Load the module
module load ABAQUS/6.14.2-linux-x86_64 

## Launch Abaqus with proper parameters 
abaqus memory="50GB" cpus=10 job=jobname input=inputfile.inp mp_mode=mpi 
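The memory arithmetic in these scripts is worth spelling out: the LSF directives -R "rusage[mem=...]" and -M are per core, while the value passed to abaqus memory= covers the whole node. A quick sketch using the 10-core example's numbers (the variable names here are purely illustrative):

```shell
# Per-core LSF reservations vs. the per-node total handed to ABAQUS.
# Values mirror the 10-core example above; variable names are illustrative.
per_core_mb=5000                            # -R "rusage[mem=5000]" and -M 5000
cores_per_node=10                           # -n 10 with -R "span[ptile=10]"
node_total_mb=$((per_core_mb * cores_per_node))
node_total_gb=$((node_total_mb / 1000))
echo "memory= should be about ${node_total_gb}GB per node"
```

This is why the single-core script passes memory="5GB" while the 10-core script passes memory="50GB".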


To submit the batch job, run the following (where jobscript is a plain-text file that looks like one of the examples above):

[NetID@ada1 ~]$ bsub < jobscript

Terra Example

A single-core example with a user subroutine: (Last updated March 24, 2017)

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE
#SBATCH --get-user-env=L

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=GaussianJob    # Sets the job name to GaussianJob
#SBATCH --time=2:00:00            # Sets the runtime limit to 2 hr
#SBATCH --ntasks=1                # Requests 1 core
#SBATCH --ntasks-per-node=1       # Requests 1 core per node (1 node)
#SBATCH --mem=5G                  # Requests 5GB of memory per node
#SBATCH --output=GaussianJob.o%J  # Sends stdout and stderr to GaussianJob.o[jobID]

## Load the module
module purge
module load ABAQUS/2017

## Launch Abaqus with proper parameters 
abaqus memory="5GB" cpus=1 job=jobname input=inputfile.inp user=filename.for

A multi-core (10-core), single-node example, with no user subroutine: (Last updated June 14, 2017)

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE
#SBATCH --get-user-env=L

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=GaussianJob    # Sets the job name to GaussianJob
#SBATCH --time=2:00:00            # Sets the runtime limit to 2 hr
#SBATCH --ntasks=10               # Requests 10 cores
#SBATCH --ntasks-per-node=10      # Requests 10 cores per node (1 node)
#SBATCH --mem=50G                 # Requests 50GB of memory per node
#SBATCH --output=GaussianJob.o%J  # Sends stdout and stderr to GaussianJob.o[jobID]

## Load the module
module purge
module load ABAQUS/2017

# setup host list for ABAQUS
slurm_setup_abaqus.sh
 
## Launch Abaqus with proper parameters 
abaqus memory="50GB" cpus=$SLURM_NTASKS job=jobname input=inputfile.inp 
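Note that, unlike the per-core LSF figures used on Ada, Slurm's --mem is per node. Dividing it by --ntasks-per-node recovers the per-core share; a quick sanity check with this example's numbers (variable names are illustrative):

```shell
# Slurm's --mem is per node; derive the per-core share for this example.
mem_per_node_gb=50        # --mem=50G
tasks_per_node=10         # --ntasks-per-node=10
gb_per_core=$((mem_per_node_gb / tasks_per_node))
echo "${gb_per_core}GB per core"
```

This matches the 5GB-per-core figure used throughout the Ada examples.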

A multi-core (56-core), multi-node example, with no user subroutine: (Last updated June 14, 2017)

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE
#SBATCH --get-user-env=L

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=GaussianJob    # Sets the job name to GaussianJob
#SBATCH --time=2:00:00            # Sets the runtime limit to 2 hr
#SBATCH --ntasks=56               # Requests 56 cores
#SBATCH --ntasks-per-node=28      # Requests 28 cores per node (2 nodes)
#SBATCH --mem=50G                 # Requests 50GB of memory per node
#SBATCH --output=GaussianJob.o%J  # Sends stdout and stderr to GaussianJob.o[jobID]

## Load the module
module purge
module load ABAQUS/2017

# setup host list for ABAQUS
slurm_setup_abaqus.sh
 
## Launch Abaqus with proper parameters 
abaqus memory="50GB" cpus=$SLURM_NTASKS job=jobname input=inputfile.inp mp_mode=mpi 
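With --ntasks=56 and --ntasks-per-node=28, Slurm allocates two nodes, and slurm_setup_abaqus.sh builds the host list ABAQUS needs to span them. The contents of that site-provided script are not shown here, but a minimal sketch of the idea, assuming the standard ABAQUS mp_host_list convention in abaqus_v6.env, might look like the following (the host names below are placeholders; a real script would obtain them from scontrol show hostnames "$SLURM_JOB_NODELIST"):

```shell
# Hypothetical sketch of a Slurm-to-ABAQUS host-list helper.
# Placeholder host names; a real job would instead use:
#   hosts=$(scontrol show hostnames "$SLURM_JOB_NODELIST")
hosts="node001 node002"
ppn=28                                   # --ntasks-per-node=28
list=""
for h in $hosts; do
    list="${list}['${h}',${ppn}],"
done
# ABAQUS reads mp_host_list from its environment file, abaqus_v6.env
echo "mp_host_list=[${list%,}]" > abaqus_v6.env
cat abaqus_v6.env
```

The generated line tells the ABAQUS MPI launcher which hosts to use and how many processes to place on each.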

To submit the batch job, run the following (where jobscript is a plain-text file that looks like one of the examples above):

[NetID@terra ~]$ sbatch jobscript

Usage on the VNC Nodes

The VNC nodes allow use of a graphical user interface (GUI) without disrupting other users.

VNC jobs and GUI usage do come with restrictions. All VNC jobs are limited to a single node (20 cores, 64GB or 256GB of memory). There are fewer VNC nodes than comparable compute nodes.

For more information, including instructions, on using software on the VNC nodes, please visit our Ada Remote Visualization page.

Running the Gaussian GUI (AGUI/GaussView)

While in a VNC job, use abaqus cae (with vglrun) to start the ABAQUS GUI:

[NetID@gpu ~]$ vglrun abaqus cae

Frequently Asked Questions

Q: What versions of ABAQUS are available and which should I use?
A: You can see which versions of ABAQUS are available by using the module spider command. If you do not need a specific version of ABAQUS, we generally recommend the newest version we provide. Always load a specific version of a module rather than the default: default versions change over time, which can cause problems later, and it is best to know exactly which module you are using.
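For example (the module names below follow those used elsewhere on this page; the versions actually installed may differ):

```shell
module spider ABAQUS      # list the ABAQUS versions installed on the cluster
module load ABAQUS/2017   # load an explicit version rather than the default
```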

Q: I am having a hard time setting up a batch job with Abaqus 6.14 on Terra. What am I missing?
A: Abaqus 6.14 uses an MPI library (IBM Platform MPI 9.1.2) that most likely will not work with Terra's interconnect fabric (Intel Omni-Path). Your options are:

  • Option 1: Ideally, use Abaqus 2016 or 2017 on Terra. Both versions have been modified to support Terra's Omni-Path fabric for multi-node jobs.
  • Option 2: If you need to use Abaqus 6.14 on Terra (e.g., resuming a previous 6.14 simulation), you will need to limit your job to a single node (at most 28 cores). In this case, do not use the mp_mode=mpi option.
  • Option 3: If you need to use Abaqus 6.14 with multiple nodes and will be resuming a previous 6.14 simulation, you will probably need to use the Ada cluster instead.


Q: How do I open the ABAQUS GUI?
A: Using the ABAQUS GUI (or almost any GUI on our clusters) for anything more than light editing requires the use of a VNC job. For information on how to use a VNC job, please see our Remote Visualization page.

  • Step 1: Start a VNC session with the instructions from the page linked above.
  • Step 2: Load the ABAQUS module as shown above.
  • Step 3: Launch the ABAQUS GUI as shown below:
[NetID@gpu ~]$ vglrun abaqus cae

Note: We recommend that you do not submit jobs from the ABAQUS GUI unless they take only seconds to run. Submitting jobs through the GUI can cause many issues and often takes much longer than running the job non-interactively.