Terra:Batch Job Files
Building Job Files
While not the only method of submitting programs for execution, job files fulfill the needs of most users.
The general idea behind job files is as follows:
- Make resource requests
- Add your commands and/or scripting
- Submit the job to the batch system
In a job file, resource specification options are preceded by a script directive. For each batch system, this directive is different. On Terra (Slurm) this directive is #SBATCH.
For every line of resource specifications, this directive must be the first text of the line, and all specifications must come before any executable lines.
An example of a resource specification is given below:
#SBATCH --job-name=MyExample #Set the job name to "MyExample"
Note: Comments in a job file also begin with a # but Slurm recognizes #SBATCH as a directive.
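For illustration, only lines that begin exactly with #SBATCH are treated as directives; adding an extra # turns a directive into an ordinary comment that Slurm ignores (the option values below are arbitrary):

#SBATCH --time=01:00:00 #Processed by Slurm as a directive
##SBATCH --time=01:00:00 #Ignored: treated as an ordinary comment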
A list of the most commonly used and important options for these job files is given in the following section of this wiki. Full job file examples are given below.
Basic Job Specifications
Several of the most important options are described below. These basic options are typically all that is needed to run a job on Terra.
Specification | Option | Example | Example-Purpose |
---|---|---|---|
Reset Env I | --export=NONE | | Do not propagate environment to job |
Reset Env II | --get-user-env=L | | Replicate the login environment |
Wall Clock Limit | --time=[hh:mm:ss] | --time=05:00:00 | Set wall clock limit to 5 hours, 0 minutes |
Job Name | --job-name=[SomeText] | --job-name=mpiJob | Set the job name to "mpiJob" |
Total Task/Core Count | --ntasks=[#] | --ntasks=56 | Request 56 tasks/cores total |
Tasks per Node I | --ntasks-per-node=# | --ntasks-per-node=28 | Request exactly (or at most) 28 tasks per node |
Memory Per Node | --mem=value[K|M|G|T] | --mem=32G | Request 32 GB per node |
Combined stdout/stderr | --output=[OutputName].%j | --output=mpiOut.%j | Collect stdout/err in mpiOut.[JobID] |
It should be noted that Slurm divides processing resources as such: Nodes -> Cores/CPUs -> Tasks
A user may change the number of tasks per core. For the purposes of this guide, each core will be associated with exactly a single task.
Note
To submit batch scripts using non-Intel MPI toolchains, you must omit the Reset Env I and Reset Env II parameters from your batch script:
Incompatible with OpenMPI/non-Intel MPI:

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE #Do not propagate environment
#SBATCH --get-user-env=L #Replicate login environment

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=jobname
#SBATCH --time=5:00
#SBATCH --ntasks=56
#SBATCH --ntasks-per-node=28
#SBATCH --mem=32G
#SBATCH --output=example.%j

## YOUR COMMANDS BELOW

Compatible with OpenMPI/non-Intel MPI:

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
##SBATCH --export=NONE #Do not propagate environment (OMIT THIS)
##SBATCH --get-user-env=L #Replicate login environment (OMIT THIS)

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=jobname
#SBATCH --time=5:00
#SBATCH --ntasks=56
#SBATCH --ntasks-per-node=28
#SBATCH --mem=32G
#SBATCH --output=example.%j

## YOUR COMMANDS BELOW
Optional Job Specifications
A variety of optional specifications are available to customize your job. The table below lists the specifications which are most useful for users of Terra.
Specification | Option | Example | Example-Purpose |
---|---|---|---|
Set Allocation | --account=###### | --account=274839 | Set allocation to charge to 274839 |
Email Notification I | --mail-type=[type] | --mail-type=ALL | Send email on all events |
Email Notification II | --mail-user=[address] | --mail-user=howdy@tamu.edu | Send emails to howdy@tamu.edu |
Specify Queue | --partition=[queue] | --partition=gpu | Request only nodes in gpu subset |
Specify General Resource | --gres=[resource]:[count] | --gres=gpu:1 | Request one GPU per node |
Specify GPU Type | --gres=gpu:[type]:[count] | --gres=gpu:v100:1 | Request one v100 GPU per node (valid types: k80 or v100) |
Submit Test Job | --test-only | | Submit test job for Slurm validation |
Request Temp Disk | --tmp=M | --tmp=10240 | Request at least 10 GB in temp disk space |
Request License | --licenses=[LicenseLoc] | --licenses=nastran@slurmdb:12 | Request 12 nastran licenses |
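For instance, optional specifications are simply added alongside the basic ones in a job file. The sketch below requests one GPU on the gpu partition with email notification; the job name and output file are placeholders, and the allocation number, email address, and GPU type are the example values from the tables above:

#SBATCH --job-name=gpuJob
#SBATCH --time=05:00:00
#SBATCH --ntasks=28
#SBATCH --ntasks-per-node=28
#SBATCH --mem=32G
#SBATCH --output=gpuOut.%j

##OPTIONAL JOB SPECIFICATIONS
#SBATCH --account=274839 #Charge job to allocation 274839
#SBATCH --partition=gpu #Request nodes in the gpu partition/queue
#SBATCH --gres=gpu:v100:1 #Request one v100 GPU per node
#SBATCH --mail-type=ALL #Send email on all events
#SBATCH --mail-user=howdy@tamu.edu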
Alternative Specifications
The job options within the above sections specify resources with the following method:
- Cores and CPUs are equivalent
- 1 task per CPU is assumed
- You specify: desired number of tasks (equals number of CPUs)
- You specify: desired number of tasks per node (equal to or less than the 28 cores per compute node)
- You get: total nodes equal to #ofCPUs/#ofTasksPerNode (see the worked example below)
- You specify: desired Memory per node
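As a worked example, using the values from the Basic Job Specifications table above:

#SBATCH --ntasks=56 #56 tasks/cores in total
#SBATCH --ntasks-per-node=28 #28 tasks on each node
#SBATCH --mem=32G #32 GB of memory on each node

yields 56 / 28 = 2 nodes, each with 32 GB of memory, i.e. 64 GB of memory for the job as a whole.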
Slurm allows users to specify resources in units of Tasks, CPUs, Sockets, and Nodes.
There are many overlapping settings and some settings may (quietly) overwrite the defaults of other settings. A good understanding of Slurm options is needed to correctly utilize these methods.
Specification | Option | Example | Example-Purpose |
---|---|---|---|
Node Count | --nodes=[min[-max]] | --nodes=4 | Spread all tasks/cores across 4 nodes |
CPUs per Task | --cpus-per-task=# | --cpus-per-task=4 | Require 4 CPUs per task (default: 1) |
Memory per CPU | --mem-per-cpu=MB | --mem-per-cpu=2000 | Request 2000 MB per CPU NOTE: If this parameter is less than 1024, SLURM will misinterpret it as 0 |
Tasks per Core | --ntasks-per-core=# | --ntasks-per-core=4 | Request max of 4 tasks per core |
Tasks per Node II | --tasks-per-node=# | --tasks-per-node=5 | Equivalent to Tasks per Node I |
Tasks per Socket | --ntasks-per-socket=# | --ntasks-per-socket=6 | Request max of 6 tasks per socket |
Sockets per Node | --sockets-per-node=# | --sockets-per-node=2 | Restrict to nodes with at least 2 sockets |
If you want to make resource requests in an alternative format, you are free to do so. Our ability to support alternative resource request formats may be limited.
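As one illustration of the alternative format, a single-node multithreaded job could be requested by specifying nodes and CPUs per task instead of a total task count (a minimal sketch only; the 1-task, 28-CPU layout is an example, not a recommendation):

#SBATCH --nodes=1 #Request 1 node
#SBATCH --ntasks-per-node=1 #Run a single task on that node
#SBATCH --cpus-per-task=28 #Give that task all 28 cores of the node
#SBATCH --mem-per-cpu=2000 #2000 MB per CPU (28 x 2000 MB = 56,000 MB on the node)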
Using Other Job Options
Slurm has facilities to make advanced resources requests and change settings that most Terra users do not need. These options are beyond the scope of this guide.
If you wish to explore the advanced job options, see the Advanced Documentation.
Environment Variables
All of the nodes enlisted for the execution of a job carry most of the environment variables that the login process created: HOME, SCRATCH, PWD, PATH, USER, etc. In addition, Slurm defines new ones in the environment of an executing job. Below is a list of the most commonly used environment variables.
Variable | Usage | Description |
---|---|---|
Job ID | $SLURM_JOBID | Batch job ID assigned by Slurm. |
Job Name | $SLURM_JOB_NAME | The name of the Job. |
Queue | $SLURM_JOB_PARTITION | The name of the queue the job is dispatched from. |
Submit Directory | $SLURM_SUBMIT_DIR | The directory the job was submitted from. |
Temporary Directory | $TMPDIR | A directory assigned to the job on the compute node's local disk, located at /work/job.$SLURM_JOBID. Use of $TMPDIR is recommended for jobs that use many small temporary files. |
Note: To see all relevant Slurm environment variables for a job, add the following line to the executable section of a job file and submit that job. All the variables will be printed in the output file.
env | grep SLURM
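For instance, the executable section of a job file might use these variables as follows (a minimal sketch; largeInput.dat and myProgram.o are hypothetical file and program names):

cd $SLURM_SUBMIT_DIR # Start in the directory the job was submitted from
echo "Job $SLURM_JOBID ($SLURM_JOB_NAME) running in queue $SLURM_JOB_PARTITION"
cp largeInput.dat $TMPDIR # Stage temporary data on node-local disk at /work/job.$SLURM_JOBID
./myProgram.o $TMPDIR/largeInput.dat # Run the program against the staged copy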
Clarification on Memory, Core, and Node Specifications
Memory Specifications are IMPORTANT.
For examples on calculating memory, core, and/or node specifications on Terra: Specification Clarification.
Executable Commands
After the resource specification section of a job file comes the executable section. This executable section contains all the necessary UNIX, Linux, and program commands that will be run in the job.
Some commands that may go in this section include, but are not limited to:
- Changing directories
- Loading, unloading, and listing modules
- Launching software
An example of a possible executable section is below:
cd $SCRATCH # Change current directory to /scratch/user/[netID]/
ml purge # Purge all modules
ml intel/2016b # Load the intel/2016b module
ml # List all currently loaded modules

./myProgram.o # Run "myProgram.o"
For information on the module system or specific software, visit our Modules page and our Software page.