Job File Examples

Several examples of Slurm job files for Terra are listed below. For help translating Ada (LSF) job files to Slurm, see the Batch Job Translation Guide.
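
For instance, a few common LSF directives map onto Slurm counterparts roughly as follows (an approximate sketch; see the guide for the full mapping):

#BSUB -J myjob     ->  #SBATCH -J myjob       (job name)
#BSUB -W 2:00      ->  #SBATCH -t 02:00:00    (wall clock limit)
#BSUB -n 8         ->  #SBATCH --ntasks=8     (number of tasks)
#BSUB -o out.%J    ->  #SBATCH -o out.%j      (stdout file; %J/%j = job ID)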

Example Job 1: A serial job (single core, single node)

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE        #Do not propagate environment
#SBATCH --get-user-env=L     #Replicate login environment

##NECESSARY JOB SPECIFICATIONS
#SBATCH -J JobExample1       #Set the job name to "JobExample1"
#SBATCH -t 01:30:00          #Set the wall clock limit to 1hr and 30min
#SBATCH -N 1                 #Request 1 node
#SBATCH --ntasks-per-node=1  #Request 1 task/core per node
#SBATCH --mem=2560M          #Request 2560MB (2.5GB) per node
#SBATCH -o Example1Out.%j    #Send stdout/err to "Example1Out.[jobID]"

##OPTIONAL JOB SPECIFICATIONS
#SBATCH -A 123456            #Set billing account to 123456
#SBATCH --mail-type=ALL      #Send email on all job events
#SBATCH --mail-user=email_address  #Send all emails to email_address

#First Executable Line
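
Once saved to a file (the name example1.job below is only an illustration), a job file like this is submitted from a login node with sbatch, and standard Slurm commands track its progress:

sbatch example1.job    #Submit the job file; prints "Submitted batch job [jobID]"
squeue -u $USER        #List your pending and running jobs
scancel jobID          #Cancel the job by its ID if needed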

Example Job 2: A multi-core, single-node job

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE        #Do not propagate environment
#SBATCH --get-user-env=L     #Replicate login environment

##NECESSARY JOB SPECIFICATIONS
#SBATCH -J JobExample2       #Set the job name to "JobExample2"
#SBATCH -t 06:30:00          #Set the wall clock limit to 6hr and 30min
#SBATCH -N 1                 #Request 1 node
#SBATCH --ntasks-per-node=8  #Request 8 tasks/cores per node
#SBATCH --mem=8192M          #Request 8192MB (8GB) per node 
#SBATCH -o Example2Out.%j    #Send stdout/err to "Example2Out.[jobID]" 

##OPTIONAL JOB SPECIFICATIONS
#SBATCH -A 123456            #Set billing account to 123456
#SBATCH --mail-type=ALL      #Send email on all job events
#SBATCH --mail-user=email_address  #Send all emails to email_address 

#First Executable Line
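
The 8 cores requested above can be consumed by a single multithreaded process. As a minimal sketch, assuming a hypothetical OpenMP executable named my_omp_program, the executable section might read:

export OMP_NUM_THREADS=8    #Match the thread count to the 8 requested cores
./my_omp_program            #Hypothetical multithreaded executable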

Example Job 3: A multi-core, multi-node job

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE        #Do not propagate environment
#SBATCH --get-user-env=L     #Replicate login environment

##NECESSARY JOB SPECIFICATIONS
#SBATCH -J JobExample3       #Set the job name to "JobExample3"
#SBATCH -t 1-12:00:00        #Set the wall clock limit to 1 day and 12hr
#SBATCH -N 4                 #Request 4 nodes
#SBATCH --ntasks-per-node=2  #Request 2 tasks/cores per node
#SBATCH --mem=4096M          #Request 4096MB (4GB) per node 
#SBATCH -o Example3Out.%j    #Send stdout/err to "Example3Out.[jobID]"

##OPTIONAL JOB SPECIFICATIONS
#SBATCH -A 123456            #Set billing account to 123456
#SBATCH --mail-type=ALL      #Send email on all job events
#SBATCH --mail-user=email_address  #Send all emails to email_address

#First Executable Line
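
With 4 nodes and 2 tasks per node, this job has 8 tasks in total, the usual layout for an MPI program. A minimal sketch of the executable section, assuming a hypothetical MPI executable my_mpi_program and an MPI stack provided by a module (the module name is an assumption; check "module avail" for what is actually installed):

module load intel                #Assumed module supplying an MPI stack; adjust to your environment
mpirun -np 8 ./my_mpi_program    #Launch 8 MPI tasks (4 nodes x 2 tasks per node)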

Example Job 4: A serial GPU job

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE        #Do not propagate environment
#SBATCH --get-user-env=L     #Replicate login environment

##NECESSARY JOB SPECIFICATIONS
#SBATCH -J JobExample4       #Set the job name to "JobExample4"
#SBATCH -t 01:30:00          #Set the wall clock limit to 1hr and 30min
#SBATCH -N 1                 #Request 1 node
#SBATCH --ntasks-per-node=1  #Request 1 task/core per node
#SBATCH --mem=2560M          #Request 2560MB (2.5GB) per node
#SBATCH -o Example4Out.%j    #Send stdout/err to "Example4Out.[jobID]"
#SBATCH --gres=gpu:1         #Request 1 GPU
#SBATCH -p gpu               #Request the GPU partition/queue

##OPTIONAL JOB SPECIFICATIONS
#SBATCH -A 123456            #Set billing account to 123456
#SBATCH --mail-type=ALL      #Send email on all job events
#SBATCH --mail-user=email_address  #Send all emails to email_address

#First Executable Line
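
A minimal sketch of the executable section for this GPU job, assuming a CUDA toolkit module and a hypothetical GPU executable named my_gpu_program (both names are assumptions; check "module avail" for the actual toolkit module):

module load CUDA            #Assumed CUDA toolkit module name; adjust to your environment
./my_gpu_program            #Hypothetical CUDA-enabled executable; uses the 1 requested GPU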

More specialized example job files (when available) can be found on the HPRC Software page.