Gaussian & GaussView 6

Gaussian is restricted software.

Usage of this software is restricted to Laboratory for Molecular Simulation (LMS) subscribers. If you believe you are eligible to use Gaussian on our clusters, please email the HPRC Help Desk with the request and justification.

Description

Gaussian is a software package used for calculating molecular electronic structure and properties. Gaussian is used by chemists, chemical engineers, biochemists and physicists for research in established and emerging areas of chemical interest. This package includes a wide range of ab initio and semi-empirical methods for energy, gradient, frequency and property calculations.

GaussView 6 is the latest version of a graphical interface that is native to Gaussian. It enables users to create Gaussian input files from a graphical interface and visualize the calculation results (plot molecular orbitals and other properties, animate vibrations, visualize predicted spectra, etc.).
Homepage: Gaussian.com
Manual: Gaussian 16 manual

Those who have been specifically approved for access will be able to run GaussView 6 as detailed in the sections below.

Using GaussView 6

To set up your environment to run GaussView 6, you will need to load a Gaussian module.

Find the versions of Gaussian installed:

mla gaussian

Load the module. Replace GaussianVersion with the version of Gaussian you wish to use.

ml GaussianVersion
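
For example, if the mla listing shows a module named Gaussian/g16_C01 (an illustrative name; use whichever version the listing actually reports):

ml Gaussian/g16_C01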

You should now be able to run GaussView 6 using the command:

gv

Note: If OpenGL does not work properly, please use MesaGL instead:

export USE_MESAGL=1

MesaGL causes slower rendering, so use it only if needed.
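
Putting the steps above together, a typical GaussView session might look like the following sketch (the module name is illustrative, and the USE_MESAGL line is only needed when OpenGL fails):

mla gaussian            # list the installed Gaussian versions
ml Gaussian/g16_C01     # load your chosen version (illustrative name)
export USE_MESAGL=1     # optional fallback if OpenGL rendering fails
gv                      # launch GaussView 6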

Running G16

Those who have been specifically approved for access will be able to run Gaussian as detailed in the sections below.

Gaussian can only run in parallel with shared memory; therefore, you cannot use more than one node and are limited to a maximum of 48 cores on Grace, 28 cores on Terra, and 20 cores on Ada.

Below are example job files for Gaussian 16. You can create your own job files to fit your needs, or you may use qprep, a script specifically set up to create and submit Gaussian 16 job files:

Grace: /sw/restricted/lms/bin/qprep
Terra: /sw/group/lms/bin/qprep
Ada: /sw/lms/bin/qprep

Help for qprep can be obtained by running qprep -h
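
The example job files below read their input from GaussianJob.com in the submission directory. As a minimal, generic sketch of the Gaussian 16 input format (an illustration, not content from this guide), a water geometry optimization might look like:

%NProcShared=28
%Mem=50GB
#P B3LYP/6-31G(d) Opt

Water geometry optimization

0 1
O   0.000   0.000   0.117
H   0.000   0.757  -0.467
H   0.000  -0.757  -0.467

The %NProcShared and %Mem Link 0 lines are optional here, since the job scripts below already set the core count and memory through the Default.Route file; values given in the input file override those defaults.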

Terra Example Job file

A multicore (28-core) example (last updated Sept. 14, 2020):

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE
#SBATCH --get-user-env=L

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=GaussianJob         # Sets the job name to GaussianJob
#SBATCH --time=2:00:00                 # Sets the runtime limit to 2 hr
#SBATCH --ntasks=28                    # Requests 28 cores
#SBATCH --ntasks-per-node=28           # Requests 28 cores per node (1 node)
#SBATCH --mem=56G                      # Requests 56GB of memory per node
#SBATCH --error=GaussianJob.job.e%J    # Sends stderr to GaussianJob.job.e[jobID]
#SBATCH --output=GaussianJob.job.o%J   # Sends stdout to GaussianJob.job.o[jobID]

cd $TMPDIR                                  # change to the local disk temporary directory
export g16root=/sw/group/lms/sw/g16_B01     # set the g16root variable
. $g16root/g16/bsd/g16.profile              # source g16.profile to set up the environment for g16
echo -P- $SLURM_NPROCS > Default.Route      # set the number of cores for Gaussian to use ($SLURM_NPROCS is set to the number of cores requested by #SBATCH --ntasks)
echo -M- 50GB >> Default.Route              # set the memory for Gaussian to use (50GB)
module purge                                # purge all modules
g16 < $SLURM_SUBMIT_DIR/GaussianJob.com > $SLURM_SUBMIT_DIR/GaussianJob.log   # run Gaussian
exit                                        # exit when the job is done
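
Note that the script runs Gaussian from $TMPDIR, so scratch files, including any checkpoint file, are written to node-local disk and discarded when the job ends. If you need the checkpoint file, one possible addition (illustrative, not part of the script above) is to request it in the input file and copy it back before the exit line:

# assumes a "%Chk=GaussianJob.chk" Link 0 line in GaussianJob.com (hypothetical file name)
cp $TMPDIR/GaussianJob.chk $SLURM_SUBMIT_DIR/   # copy the checkpoint back to the submission directory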

To submit the job to the queue, use the following command:

[ NetID@terra1 ~]$ sbatch jobscript

Grace Example Job file

A multicore (48-core) example (last updated July 21, 2021):

#!/bin/bash
##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=GaussianJob         # Sets the job name to GaussianJob
#SBATCH --time=2:00:00                 # Sets the runtime limit to 2 hr
#SBATCH --ntasks=48                    # Requests 48 cores
#SBATCH --ntasks-per-node=48           # Requests 48 cores per node (always specify 1 node for Gaussian jobs)
#SBATCH --mem=360G                     # Requests 360GB of memory per node
#SBATCH --error=GaussianJob.job.e%J    # Sends stderr to GaussianJob.job.e[jobID]
#SBATCH --output=GaussianJob.job.o%J   # Sends stdout to GaussianJob.job.o[jobID]

cd $TMPDIR                                              # change to the local disk temporary directory
export g16root=/sw/restricted/lms/sw/Gaussian/g16_C01   # set the g16root variable
. $g16root/g16/bsd/g16.profile                          # source g16.profile to set up the environment for g16
echo -P- $SLURM_NPROCS > Default.Route                  # set the number of cores for Gaussian to use ($SLURM_NPROCS is set to the number of cores requested by #SBATCH --ntasks)
echo -M- 360GB >> Default.Route                         # set the memory for Gaussian to use (360GB)
module purge                                            # purge all modules
g16 < $SLURM_SUBMIT_DIR/GaussianJob.com > $SLURM_SUBMIT_DIR/GaussianJob.log   # run Gaussian
exit                                                    # exit when the job is done

To submit the job to the queue, use the following command:

[ NetID@grace2 ~]$ sbatch jobscript

where jobscript is the name of the job file.
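
After submitting on either cluster, you can check on the job with standard Slurm commands, for example:

squeue -u NetID     # list your queued and running jobs
sacct -j JobID      # show the state of a specific (possibly finished) job

Replace NetID and JobID with your own username and the job ID reported by sbatch.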

Frequently Asked Questions