Grace:Batch Job Files
Building Job Files
While not the only method of submitting programs for execution, job files fulfill the needs of most users.
The general idea behind job files is as follows:
- Make resource requests
- Add your commands and/or scripting
- Submit the job to the batch system
In a job file, resource specification options are preceded by a script directive. This directive is different for each batch system. On Grace (Slurm), this directive is #SBATCH.
For every line of resource specifications, this directive must be the first text of the line, and all specifications must come before any executable lines.
An example of a resource specification is given below:
#SBATCH --job-name=MyExample #Set the job name to "MyExample"
Note: Comments in a job file also begin with a #, but Slurm recognizes #SBATCH as a directive.
A list of the most commonly used and important options for these job files is given in the following section of this wiki. Full job file examples are given below.
Basic Job Specifications
Several of the most important options are described below. These basic options are typically all that is needed to run a job on Grace.
Specification | Option | Example | Example-Purpose |
---|---|---|---|
Wall Clock Limit | --time=[hh:mm:ss] | --time=05:00:00 | Set wall clock limit to 5 hours 00 min |
Job Name | --job-name=[SomeText] | --job-name=mpiJob | Set the job name to "mpiJob" |
Total Task/Core Count | --ntasks=[#] | --ntasks=96 | Request 96 tasks/cores total |
Tasks per Node I | --ntasks-per-node=# | --ntasks-per-node=48 | Request exactly (or a maximum of) 48 tasks per node |
Memory Per Node | --mem=value[K|M|G|T] | --mem=360G | Request 360 GB per node |
Combined stdout/stderr | --output=[OutputName].%j | --output=mpiOut.%j | Collect stdout/err in mpiOut.[JobID] |
It should be noted that Slurm divides processing resources as follows: Nodes -> Cores/CPUs -> Tasks
A user may change the number of tasks per core. For the purposes of this guide, each core will be associated with exactly a single task.
Additional options are available for setting your environment. Warning: these options are NOT COMPATIBLE with OpenMPI and other non-Intel MPI modules.
Specification | Option | Example | Example-Purpose |
---|---|---|---|
Reset Env I | --export=NONE | | Do not propagate environment to job |
Reset Env II | --get-user-env=L | | Replicate the login environment |
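These two options can be added alongside the other #SBATCH directives of a job that uses an Intel MPI based toolchain (for example, the intel/2020b module shown later on this page). The lines below are a minimal sketch of how they would appear in the specification section, not a complete job file:
#SBATCH --export=NONE       #Do not propagate the submission environment to the job
#SBATCH --get-user-env=L    #Replicate the login environment instead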
Below is an example batch file template for an MPI job. Notice that the environment options above have been omitted, so this example also works with OpenMPI.
#!/bin/bash
#
##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=mpiJob        #Set the job name to "mpiJob"
#SBATCH --time=05:00:00          #Set the wall clock limit to 5 hours
#SBATCH --ntasks=96              #Request 96 tasks/cores total
#SBATCH --ntasks-per-node=48     #Request 48 tasks/cores per node
#SBATCH --mem=360G               #Request 360 GB per node
#SBATCH --output=mpiOut.%j       #Collect stdout/stderr in mpiOut.[JobID]

## YOUR COMMANDS BELOW
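Once the job file is saved (for example as MyJob.slurm, a placeholder name), it is submitted to the batch system with sbatch:
sbatch MyJob.slurm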
Optional Job Specifications
A variety of optional specifications are available to customize your job. The table below lists the specifications which are most useful for users of Grace.
Specification | Option | Example | Example-Purpose |
---|---|---|---|
Set Allocation | --account=###### | --account=274839 | Set allocation to charge to 274839 |
Email Notification I | --mail-type=[type] | --mail-type=ALL | Send email on all events |
Email Notification II | --mail-user=[address] | --mail-user=howdy@tamu.edu | Send emails to howdy@tamu.edu |
Specify Queue | --partition=[queue] | --partition=gpu | Request only nodes in gpu subset |
Specify General Resource | --gres=[resource]:[count] | --gres=gpu:1 | Request one GPU per node |
Specify A100 GPU Resource | --gres=gpu:a100:[count] | --gres=gpu:a100:1 | Request one A100 GPU per node |
Specify RTX 6000 GPU Resource | --gres=gpu:rtx:[count] | --gres=gpu:rtx:2 | Request two RTX 6000 GPUs per node |
Specify T4 GPU Resource | --gres=gpu:t4:[count] | --gres=gpu:t4:4 | Request four T4 GPUs per node |
Submit Test Job | --test-only | | Submit test job for Slurm validation |
Request Temp Disk | --tmp=M | --tmp=10240 | Request at least 10 GB in temp disk space |
Request License | --licenses=[LicenseLoc] | --licenses=nastran@slurmdb:12 | Request 12 nastran licenses |
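The optional specifications are simply added below the basic ones. The following is a minimal sketch of the specification section for a GPU job built from the options in the tables above; the job name gpuJob, the one-hour time limit, and the reuse of the allocation number and email address from the table are illustrative placeholders only:
#!/bin/bash
##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=gpuJob            #Hypothetical job name
#SBATCH --time=01:00:00              #Set wall clock limit to 1 hour
#SBATCH --ntasks=48                  #Request 48 tasks/cores total
#SBATCH --mem=360G                   #Request 360 GB per node
#SBATCH --output=gpuOut.%j           #Collect stdout/err in gpuOut.[JobID]

##OPTIONAL JOB SPECIFICATIONS
#SBATCH --account=274839             #Placeholder allocation number from the table above
#SBATCH --partition=gpu              #Request the gpu queue
#SBATCH --gres=gpu:a100:1            #Request one A100 GPU per node
#SBATCH --mail-type=ALL              #Send email on all events
#SBATCH --mail-user=howdy@tamu.edu   #Example address from the table above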
Alternative Specifications
The job options within the above sections specify resources with the following method:
- Cores and CPUs are equivalent
- 1 Task per 1 CPU desired
- You specify: desired number of tasks (equals number of CPUs)
- You specify: desired number of tasks per node (equal to or less than the 48 cores per compute node)
- You get: total nodes equal to #ofCPUs/#ofTasksPerNode (see the example below)
- You specify: desired Memory per node
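For example, the MPI template above requests --ntasks=96 with --ntasks-per-node=48, so Slurm allocates 96 / 48 = 2 nodes, and --mem=360G requests 360 GB on each of those nodes.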
Slurm allows users to specify resources in units of Tasks, CPUs, Sockets, and Nodes.
There are many overlapping settings and some settings may (quietly) overwrite the defaults of other settings. A good understanding of Slurm options is needed to correctly utilize these methods.
Specification | Option | Example | Example-Purpose |
---|---|---|---|
Node Count | --nodes=[min[-max]] | --nodes=4 | Spread all tasks/cores across 4 nodes |
CPUs per Task | --cpus-per-task=# | --cpus-per-task=4 | Require 4 CPUs per task (default: 1) |
Memory per CPU | --mem-per-cpu=MB | --mem-per-cpu=2000 | Request 2000 MB per CPU. NOTE: if this value is less than 1024, Slurm will misinterpret it as 0 |
Tasks per Core | --ntasks-per-core=# | --ntasks-per-core=4 | Request max of 4 tasks per core |
Tasks per Node II | --tasks-per-node=# | --tasks-per-node=5 | Equivalent to Tasks per Node I |
Tasks per Socket | --ntasks-per-socket=# | --ntasks-per-socket=6 | Request max of 6 tasks per socket |
Sockets per Node | --sockets-per-node=# | --sockets-per-node=2 | Restrict to nodes with at least 2 sockets |
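As an illustration of this alternative style, the sketch below requests roughly the same resources as the MPI template above (2 nodes x 48 cores, about 360 GB per node), but expressed in nodes and per-CPU memory; the job name altJob is hypothetical:
#SBATCH --job-name=altJob          #Hypothetical job name
#SBATCH --time=05:00:00            #Set wall clock limit to 5 hours
#SBATCH --nodes=2                  #Request 2 nodes
#SBATCH --ntasks-per-node=48       #48 tasks per node (96 total)
#SBATCH --cpus-per-task=1          #1 CPU per task (the default)
#SBATCH --mem-per-cpu=7500         #7500 MB per CPU (~360 GB per 48-core node); keep this value at or above 1024
#SBATCH --output=altOut.%j         #Collect stdout/err in altOut.[JobID]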
If you want to make resource requests in an alternative format, you are free to do so. Our ability to support alternative resource request formats may be limited.
Using Other Job Options
Slurm has facilities to make advanced resource requests and change settings that most Grace users do not need. These options are beyond the scope of this guide.
If you wish to explore the advanced job options, see the Advanced Documentation.
Environment Variables
All the nodes enlisted for the execution of a job carry most of the environment variables the login process created: HOME, SCRATCH, PWD, PATH, USER, etc. In addition, Slurm defines new ones in the environment of an executing job. Below is a list of the most commonly used environment variables.
Variable | Usage | Description |
---|---|---|
Job ID | $SLURM_JOBID | Batch job ID assigned by Slurm. |
Job Name | $SLURM_JOB_NAME | The name of the Job. |
Queue | $SLURM_JOB_PARTITION | The name of the queue the job is dispatched from. |
Submit Directory | $SLURM_SUBMIT_DIR | The directory the job was submitted from. |
Temporary Directory | $TMPDIR | This is a directory assigned locally on the compute node for the job located at /work/job.$SLURM_JOBID. Use of $TMPDIR is recommended for jobs that use many small temporary files. |
Note: To see all relevant Slurm environment variables for a job, add the following line to the executable section of a job file and submit that job. All the variables will be printed in the output file.
env | grep SLURM
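As a further illustration, the sketch below shows how some of these variables might be used in the executable section of a job file; the file names input.dat and results.out and the program myProgram.o are hypothetical:
cd $TMPDIR                                 # Work in the node-local /work/job.$SLURM_JOBID directory
cp $SLURM_SUBMIT_DIR/input.dat .           # Copy a (hypothetical) input file from the submission directory
echo "Job $SLURM_JOBID ($SLURM_JOB_NAME) dispatched from $SLURM_JOB_PARTITION"
$SLURM_SUBMIT_DIR/myProgram.o input.dat    # Run a (hypothetical) program
cp results.out $SCRATCH/                   # Copy (hypothetical) results back to scratch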
Clarification on Memory, Core, and Node Specifications
Memory Specifications are IMPORTANT.
For examples of calculating memory, core, and/or node specifications on Grace, see Specification Clarification.
Executable Commands
After the resource specification section of a job file comes the executable section. This executable section contains all the necessary UNIX, Linux, and program commands that will be run in the job.
Some commands that may go in this section include, but are not limited to:
- Changing directories
- Loading, unloading, and listing modules
- Launching software
An example of a possible executable section is below:
cd $SCRATCH       # Change current directory to /scratch/user/[netID]/
ml purge          # Purge all modules
ml intel/2020b    # Load the intel/2020b module
ml                # List all currently loaded modules
./myProgram.o     # Run "myProgram.o"
For information on the module system or specific software, visit our Modules page and our Software page.