Job File Examples

Several examples of Slurm job files for Grace are listed below. For help translating Ada (LSF) job files, see the Batch Job Translation Guide.

NOTE: Job examples are NOT lists of commands, but templates of the contents of a job file. These examples should be pasted into a text editor and submitted as a job for testing, not entered as commands line by line.

Several optional parameters are available for jobs on Grace. In the examples below, they are commented out/ignored with a double ##. To enable one of these parameters for your job, change the ## to a single # and adjust the parameter value accordingly.
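A job file is not run line by line; the whole file is handed to the Slurm scheduler. As a minimal sketch of the workflow (assuming the template has been saved under the hypothetical name example1.job):

sbatch example1.job      #submit the job file to the Slurm scheduler
squeue -u $USER          #check the status of your pending and running jobs
scancel jobID            #cancel a job, using the job ID reported by sbatch

sbatch prints the job ID on submission; stdout/err from the job are written to the file named by the --output directive.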

Example Job 1: A serial job (single core, single node)

#!/bin/bash

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=JobExample1       #Set the job name to "JobExample1"
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr and 30min
#SBATCH --ntasks=1                   #Request 1 task
#SBATCH --mem=2560M                  #Request 2560MB (2.5GB) per node
#SBATCH --output=Example1Out.%j      #Send stdout/err to "Example1Out.[jobID]"

##OPTIONAL JOB SPECIFICATIONS
##SBATCH --account=123456             #Set billing account to 123456
##SBATCH --mail-type=ALL              #Send email on all job events
##SBATCH --mail-user=email_address    #Send all emails to email_address

#First Executable Line
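The "#First Executable Line" placeholder marks where the commands that do the actual work go. A minimal sketch for this serial example, with a hypothetical module name and executable:

module load GCC/9.3.0        #load a compiler toolchain (hypothetical module/version)
./my_serial_program          #run the serial executable (hypothetical program name)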

Example Job 2: A multi core, single node job

#!/bin/bash
##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=JobExample2       #Set the job name to "JobExample2"
#SBATCH --time=6:30:00               #Set the wall clock limit to 6hr and 30min
#SBATCH --nodes=1                    #Request 1 node
#SBATCH --ntasks-per-node=8          #Request 8 tasks/cores per node
#SBATCH --mem=8G                     #Request 8GB per node 
#SBATCH --output=Example2Out.%j      #Send stdout/err to "Example2Out.[jobID]" 

##OPTIONAL JOB SPECIFICATIONS
##SBATCH --account=123456             #Set billing account to 123456
##SBATCH --mail-type=ALL              #Send email on all job events
##SBATCH --mail-user=email_address    #Send all emails to email_address 

#First Executable Line
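For a multi core, single node job, one common pattern is to match the thread count of a shared-memory (e.g. OpenMP) program to the cores requested above. A sketch assuming a hypothetical OpenMP executable:

export OMP_NUM_THREADS=$SLURM_NTASKS_PER_NODE   #use the 8 tasks/cores requested above
./my_openmp_program                             #hypothetical multithreaded executable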

Example Job 3: A multi core, multi node job

#!/bin/bash

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=JobExample3       #Set the job name to "JobExample3"
#SBATCH --time=1-12:00:00            #Set the wall clock limit to 1 Day and 12hr
#SBATCH --ntasks=8                   #Request 8 tasks
#SBATCH --ntasks-per-node=2          #Request 2 tasks/cores per node
#SBATCH --mem=4096M                  #Request 4096MB (4GB) per node 
#SBATCH --output=Example3Out.%j      #Send stdout/err to "Example3Out.[jobID]"

##OPTIONAL JOB SPECIFICATIONS
##SBATCH --account=123456             #Set billing account to 123456
##SBATCH --mail-type=ALL              #Send email on all job events
##SBATCH --mail-user=email_address    #Send all emails to email_address 

#First Executable Line
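A multi core, multi node job usually launches an MPI program across the allocated tasks. A sketch with hypothetical module and program names:

module load intel/2020b          #hypothetical MPI toolchain module/version
mpirun ./my_mpi_program          #mpirun (or srun) picks up the 8 tasks across the allocated nodes from Slurm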

Example Job 4: A serial GPU job

#!/bin/bash

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=JobExample4       #Set the job name to "JobExample4"
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr and 30min
#SBATCH --ntasks=1                   #Request 1 task
#SBATCH --mem=2560M                  #Request 2560MB (2.5GB) per node
#SBATCH --output=Example4Out.%j      #Send stdout/err to "Example4Out.[jobID]"
#SBATCH --gres=gpu:1                 #Request 1 GPU per node (can be 1 or 2)
#SBATCH --partition=gpu              #Request the GPU partition/queue

##OPTIONAL JOB SPECIFICATIONS
##SBATCH --account=123456             #Set billing account to 123456
##SBATCH --mail-type=ALL              #Send email on all job events
##SBATCH --mail-user=email_address    #Send all emails to email_address 

#First Executable Line
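A GPU job's executable section commonly loads a CUDA toolchain and runs the GPU-enabled program. A sketch with hypothetical module and program names:

module load CUDA/11.0.2          #hypothetical CUDA module/version
./my_gpu_program                 #hypothetical executable that uses the allocated GPU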

Example Job 5: A serial GPU job with a specific GPU type

#!/bin/bash

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=JobExample5       #Set the job name to "JobExample5"
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr and 30min
#SBATCH --ntasks=1                   #Request 1 task
#SBATCH --mem=2560M                  #Request 2560MB (2.5GB) per node
#SBATCH --output=Example5Out.%j      #Send stdout/err to "Example5Out.[jobID]"
#SBATCH --gres=gpu:rtx:1             #Request 1 "rtx" GPU per node 
#SBATCH --partition=gpu              #Request the GPU partition/queue

##OPTIONAL JOB SPECIFICATIONS
##SBATCH --account=123456             #Set billing account to 123456
##SBATCH --mail-type=ALL              #Send email on all job events
##SBATCH --mail-user=email_address    #Send all emails to email_address 

#First Executable Line

Example Job 6: A parallel GPU job

#!/bin/bash

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=JobExample6       #Set the job name to "JobExample6"
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr and 30min
#SBATCH --ntasks=28                  #Request 28 tasks
#SBATCH --mem=2560M                  #Request 2560MB (2.5GB) per node
#SBATCH --output=Example6Out.%j      #Send stdout/err to "Example6Out.[jobID]"
#SBATCH --gres=gpu:2                 #Request 2 GPUs per node (can be 1 or 2)
#SBATCH --partition=gpu              #Request the GPU partition/queue

##OPTIONAL JOB SPECIFICATIONS
##SBATCH --account=123456             #Set billing account to 123456
##SBATCH --mail-type=ALL              #Send email on all job events
##SBATCH --mail-user=email_address    #Send all emails to email_address 

#First Executable Line


See more specialized job files (if available) at the HPRC Software page