
ACES:Batch Job Examples


Job File Examples

Several examples of Slurm job files for ACES are listed below. If you are translating job files from Ada (LSF), the Batch Job Translation Guide provides a reference.

NOTE: Job examples are NOT lists of commands, but templates for the contents of a job file. Paste an example into a text editor, save it, and submit it as a job to be tested; do not enter the lines one by one at the command prompt.

There are several optional parameters available for jobs on ACES. In the examples below, they are commented out (and thus ignored) with a double ##. If you wish to include one of these parameters in your job, change the leading ## to a single # and adjust the parameter value accordingly.
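Once a job file has been saved (the file name MyJob.slurm below is just a placeholder), it can be submitted with sbatch and monitored with squeue:

sbatch MyJob.slurm    #submit the job file to Slurm; prints the assigned job ID
squeue -u $USER       #check the status of your queued and running jobs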

Example Job 1: A serial job (single core, single node)

#!/bin/bash

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=Example_SNSC_CPU  #Set the job name to Example_SNSC_CPU
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr 30min
#SBATCH --ntasks=1                   #Request 1 task
#SBATCH --mem=2560M                  #Request 2560MB (2.5GB) per node
#SBATCH --output=Example_SNSC_CPU.%j #Redirect stdout/err to file
#SBATCH --partition=cpu              #Specify partition to submit job to

##OPTIONAL JOB SPECIFICATIONS
##SBATCH --account=123456            #Set billing account to 123456
##SBATCH --mail-type=ALL             #Send email on all job events
##SBATCH --mail-user=email_address   #Send all emails to email_address

#First Executable Line
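As a sketch of what the first executable lines of a serial job might look like (the module and program names below are assumptions; run module avail on ACES to see what is actually installed):

module purge                     #start from a clean module environment
module load GCCcore/12.2.0       #hypothetical compiler module; check module avail
./my_serial_program input.dat    #hypothetical single-core executable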

Example Job 2: A multi core, single node job

#!/bin/bash

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=Example_SNMC_CPU  #Set the job name to Example_SNMC_CPU
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr 30min
#SBATCH --nodes=1                    #Request 1 node
#SBATCH --ntasks-per-node=64         #Request 64 tasks/cores per node
#SBATCH --mem=248G                   #Request 248G (248GB) per node
#SBATCH --output=Example_SNMC_CPU.%j #Redirect stdout/err to file
#SBATCH --partition=cpu              #Specify partition to submit job to

##OPTIONAL JOB SPECIFICATIONS
##SBATCH --account=123456            #Set billing account to 123456
##SBATCH --mail-type=ALL             #Send email on all job events
##SBATCH --mail-user=email_address   #Send all emails to email_address 

#First Executable Line
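For a single node, multi core job, the executable section might run a threaded (e.g. OpenMP) program, matching the thread count to the requested cores. A sketch, with the program name as an assumption:

export OMP_NUM_THREADS=$SLURM_NTASKS_PER_NODE   #use all 64 requested cores
./my_threaded_program input.dat                 #hypothetical OpenMP executable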

Example Job 3: A multi core, multi node job

#!/bin/bash

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=Example_MNMC_CPU  #Set the job name to Example_MNMC_CPU
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr 30min
#SBATCH --nodes=2                    #Request 2 nodes
#SBATCH --ntasks-per-node=64         #Request 64 tasks/cores per node
#SBATCH --mem=248G                   #Request 248G (248GB) per node
#SBATCH --output=Example_MNMC_CPU.%j #Redirect stdout/err to file
#SBATCH --partition=cpu              #Specify partition to submit job to

##OPTIONAL JOB SPECIFICATIONS
##SBATCH --account=123456            #Set billing account to 123456
##SBATCH --mail-type=ALL             #Send email on all job events
##SBATCH --mail-user=email_address   #Send all emails to email_address

#First Executable Line
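For a multi node job, the executable section typically launches an MPI program across all allocated tasks. A sketch; the module and program names are assumptions:

module purge
module load foss/2022b               #hypothetical MPI toolchain; check module avail
mpirun ./my_mpi_program input.dat    #runs across all 128 tasks (2 nodes x 64)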

Example Job 4: A serial GPU job (single node, single core)

#!/bin/bash

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=Example_SNSC_GPU  #Set the job name to Example_SNSC_GPU
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr 30min
#SBATCH --ntasks=1                   #Request 1 task
#SBATCH --mem=248G                   #Request 248G (248GB) per node
#SBATCH --output=Example_SNSC_GPU.%j #Redirect stdout/err to file
#SBATCH --partition=gpu              #Specify partition to submit job to
#SBATCH --gres=gpu:a100:1            #Specify GPU(s) per node, 1 A100 GPU

##OPTIONAL JOB SPECIFICATIONS
##SBATCH --account=123456            #Set billing account to 123456
##SBATCH --mail-type=ALL             #Send email on all job events
##SBATCH --mail-user=email_address   #Send all emails to email_address

#First Executable Line
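For a GPU job, the executable section usually loads a CUDA toolchain and runs a GPU-enabled program. A sketch; the module version and program name are assumptions:

module purge
module load CUDA/12.0.0          #hypothetical CUDA module; check module avail
nvidia-smi                       #optional: confirm the allocated A100 is visible
./my_gpu_program input.dat       #hypothetical CUDA executable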

Example Job 5: A multi core GPU job (single node, multiple cores)

#!/bin/bash

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=Example_SNMC_GPU  #Set the job name to Example_SNMC_GPU
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr 30min
#SBATCH --nodes=1                    #Request 1 node
#SBATCH --ntasks-per-node=32         #Request 32 tasks/cores per node
#SBATCH --mem=248G                   #Request 248G (248GB) per node
#SBATCH --output=Example_SNMC_GPU.%j #Redirect stdout/err to file
#SBATCH --partition=gpu              #Specify partition to submit job to
#SBATCH --gres=gpu:a100:10           #Specify GPU(s) per node, 10 A100 GPUs

##OPTIONAL JOB SPECIFICATIONS
##SBATCH --account=123456            #Set billing account to 123456
##SBATCH --mail-type=ALL             #Send email on all job events
##SBATCH --mail-user=email_address   #Send all emails to email_address

#First Executable Line
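With several GPUs on one node, Slurm typically exports the assigned device IDs in CUDA_VISIBLE_DEVICES, which a multi-GPU program can use to enumerate its devices. A sketch, with the program name as an assumption:

echo "Allocated GPUs: $CUDA_VISIBLE_DEVICES"   #device IDs assigned by Slurm
./my_multi_gpu_program input.dat               #hypothetical program using all 10 GPUs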

Example Job 6: A parallel GPU job (multiple nodes, multiple cores)

#!/bin/bash  

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=Example_MNMC_GPU  #Set the job name to Example_MNMC_GPU
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr 30min
#SBATCH --nodes=2                    #Request 2 nodes
#SBATCH --ntasks-per-node=32         #Request 32 tasks/cores per node
#SBATCH --mem=248G                   #Request 248G (248GB) per node
#SBATCH --output=Example_MNMC_GPU.%j #Redirect stdout/err to file
#SBATCH --partition=gpu              #Specify partition to submit job to
#SBATCH --gres=gpu:a100:1            #Specify GPU(s) per node, 1 A100 GPU

##OPTIONAL JOB SPECIFICATIONS
##SBATCH --account=123456            #Set billing account to 123456
##SBATCH --mail-type=ALL             #Send email on all job events
##SBATCH --mail-user=email_address   #Send all emails to email_address

#First Executable Line
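A multi node GPU job usually launches one MPI rank per task, with each rank driving a GPU on its node. A sketch; the module and program names are assumptions:

module purge
module load foss/2022b CUDA/12.0.0     #hypothetical modules; check module avail
srun ./my_mpi_gpu_program input.dat    #srun starts all 64 tasks across the 2 nodes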


See more specialized job file examples (where available) on the HPRC Software page.