Terra:Batch Job Examples

From TAMU HPRC
Revision as of 13:09, 10 February 2017

Job File Examples

Several examples of Slurm job files for Terra are listed below. For help translating Ada (LSF) job files to Slurm, see the Batch Job Translation Guide.

Example Job 1: A serial job (single core, single node)

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE                #Do not propagate environment
#SBATCH --get-user-env=L             #Replicate login environment

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=JobExample1       #Set the job name to "JobExample1"
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr and 30min
#SBATCH --ntasks=1                   #Request 1 task
#SBATCH --mem=2560M                  #Request 2560MB (2.5GB) per node
#SBATCH --output=Example1Out.%j      #Send stdout/err to "Example1Out.[jobID]"

##OPTIONAL JOB SPECIFICATIONS
#SBATCH --account=123456             #Set billing account to 123456
#SBATCH --mail-type=ALL              #Send email on all job events
#SBATCH --mail-user=email_address    #Send all emails to email_address

#First Executable Line
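A job file like the one above is submitted with Slurm's standard `sbatch` command and monitored with `squeue`. A minimal sketch of that workflow follows; the filename `JobExample1.job` is a placeholder, and these commands naturally require a Slurm cluster to run:

```shell
# Submit the job script (JobExample1.job is a hypothetical filename);
# --parsable makes sbatch print just the numeric job ID.
jobid=$(sbatch --parsable JobExample1.job)

# Check the job's state (PD = pending, R = running):
squeue -j "$jobid"

# Once the job runs, stdout/stderr accumulate in Example1Out.$jobid
# (from the --output=Example1Out.%j directive) in the submission directory.
```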

Example Job 2: A multi-core, single-node job

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE                #Do not propagate environment
#SBATCH --get-user-env=L             #Replicate login environment

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=JobExample2       #Set the job name to "JobExample2"
#SBATCH --time=6:30:00               #Set the wall clock limit to 6hr and 30min
#SBATCH --nodes=1                    #Request 1 node
#SBATCH --ntasks-per-node=8          #Request 8 tasks/cores per node
#SBATCH --mem=8G                     #Request 8GB per node 
#SBATCH --output=Example2Out.%j      #Send stdout/err to "Example2Out.[jobID]" 

##OPTIONAL JOB SPECIFICATIONS
#SBATCH --account=123456             #Set billing account to 123456
#SBATCH --mail-type=ALL              #Send email on all job events
#SBATCH --mail-user=email_address    #Send all emails to email_address 

#First Executable Line
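As a sanity check on the numbers above (an illustrative calculation, not Slurm output): `--mem` is a per-node request, so with 8 tasks on one node each core's effective share of the 8 GB is:

```shell
# --mem=8G is a per-node request; with --ntasks-per-node=8,
# each core's effective share is 8192 MB / 8.
mem_per_node_mb=8192
tasks_per_node=8
echo $(( mem_per_node_mb / tasks_per_node ))   # → 1024 (MB per core)
```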

Example Job 3: A multi-core, multi-node job

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE                #Do not propagate environment
#SBATCH --get-user-env=L             #Replicate login environment

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=JobExample3       #Set the job name to "JobExample3"
#SBATCH --time=1-12:00:00            #Set the wall clock limit to 1 day and 12hr
#SBATCH --ntasks=8                   #Request 8 tasks
#SBATCH --ntasks-per-node=2          #Request 2 tasks/cores per node
#SBATCH --mem=4096M                  #Request 4096MB (4GB) per node 
#SBATCH --output=Example3Out.%j      #Send stdout/err to "Example3Out.[jobID]"

##OPTIONAL JOB SPECIFICATIONS
#SBATCH --account=123456             #Set billing account to 123456
#SBATCH --mail-type=ALL              #Send email on all job events
#SBATCH --mail-user=email_address    #Send all emails to email_address 

#First Executable Line
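The node count implied by these settings can be checked with a little arithmetic (a sketch of what Slurm computes itself): 8 tasks at 2 tasks per node needs 4 nodes, and the `--mem=4096M` request applies to each of those nodes.

```shell
# Slurm allocates ceil(ntasks / ntasks-per-node) nodes; the ceiling
# is done with the usual integer-division idiom.
ntasks=8
tasks_per_node=2
nodes=$(( (ntasks + tasks_per_node - 1) / tasks_per_node ))
echo "$nodes"                          # → 4
echo "total MB: $(( nodes * 4096 ))"   # → total MB: 16384
```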

Example Job 4: A serial GPU job

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE                #Do not propagate environment
#SBATCH --get-user-env=L             #Replicate login environment

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=JobExample4       #Set the job name to "JobExample4"
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr and 30min
#SBATCH --ntasks=1                   #Request 1 task
#SBATCH --mem=2560M                  #Request 2560MB (2.5GB) per node
#SBATCH --output=Example4Out.%j      #Send stdout/err to "Example4Out.[jobID]"
#SBATCH --gres=gpu:1                 #Request 1 GPU per node (can be 1 or 2)
#SBATCH --partition=gpu              #Request the GPU partition/queue

##OPTIONAL JOB SPECIFICATIONS
#SBATCH --account=123456             #Set billing account to 123456
#SBATCH --mail-type=ALL              #Send email on all job events
#SBATCH --mail-user=email_address    #Send all emails to email_address 

#First Executable Line
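Inside a job that was granted `--gres=gpu`, Slurm typically exports `CUDA_VISIBLE_DEVICES` for the allocated device(s). A first executable line might sanity-check this before launching GPU code (a sketch; outside a GPU job the variable is simply unset and the fallback prints instead):

```shell
# CUDA_VISIBLE_DEVICES is normally set by Slurm when a GPU was granted
# via --gres=gpu; print it (or "none") before starting GPU work.
echo "GPUs visible: ${CUDA_VISIBLE_DEVICES:-none}"
```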

More specialized job files (where available) can be found on the HPRC Software page.