
Batch System


The batch system is a load distribution implementation that ensures convenient and fair use of a shared resource. Submitting jobs to a batch system allows a user to reserve specific resources with minimal interference to other users. All users are required to submit resource-intensive processing to the compute nodes through the batch system - attempting to circumvent the batch system is not allowed.

On ACES, Slurm is the batch system that provides job management.

Building Job Files

While not the only method of submitting programs for execution, job files fulfill the needs of most users.

The general idea behind job files follows:

  • Request resources
  • Add your commands and/or scripts to run
  • Submit the job to the batch system

In a job file, resource specification options are preceded by a script directive. For each batch system, this directive is different. On ACES (Slurm) this directive is #SBATCH. For every line of resource specifications, this directive must be the first text of the line, and all specifications must come before any executable lines. An example of a resource specification is given below:

#SBATCH --job-name=MyExample  #Set the job name to "MyExample"

Note: Comments in a job file also begin with a # but Slurm recognizes #SBATCH as a directive.
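
To make the layout concrete, here is a minimal sketch of a complete job file: all #SBATCH directives first, then the executable lines. The resource values are placeholders for illustration, not recommendations.

```shell
#!/bin/bash
#SBATCH --job-name=MyExample         #Set the job name to "MyExample"
#SBATCH --time=00:10:00              #Set the wall clock limit to 10 min
#SBATCH --ntasks=1                   #Request 1 task
#SBATCH --output=MyExample.%j        #Send stdout/err to "MyExample.[jobID]"

# Executable lines follow all #SBATCH directives
echo "Hello from $(hostname)"
```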

A list of the most commonly used and important options for these job files is given in the following section.

Basic Job Specifications

Several of the most important options are described below. These basic options are typically all that is needed to run a job on ACES.

Specification | Option | Example | Example Purpose
Wall Clock Limit | --time=[hh:mm:ss] | --time=05:00:00 | Set wall clock limit to 5 hours 00 min
Job Name | --job-name=[SomeText] | --job-name=mpiJob | Set the job name to "mpiJob"
Total Task/Core Count | --ntasks=[#] | --ntasks=96 | Request 96 tasks/cores total
Tasks per Node | --ntasks-per-node=[#] | --ntasks-per-node=48 | Request exactly (or a maximum of) 48 tasks per node
Memory per Node | --mem=value[K|M|G|T] | --mem=240G | Request 240 GB per node
Combined stdout/stderr | --output=[OutputName].%j | --output=mpiOut.%j | Collect stdout/stderr in mpiOut.[JobID]

It should be noted that Slurm divides processing resources as follows: Nodes -> Cores/CPUs -> Tasks

A user may change the number of tasks per core. For the purposes of this guide, each core will be associated with exactly a single task.

Optional Job Specifications

A variety of optional specifications are available to customize your job. The table below lists the specifications which are most useful for users of ACES.

Batch specifications for ACES coming soon.

Alternative Specifications

The job options within the above sections specify resources with the following method:

  • Cores and CPUs are equivalent
  • 1 Task per 1 CPU desired
  • You specify: desired number of tasks (equals number of CPUs)
  • You specify: desired number of tasks per node (equal to or less than the total number of cores per compute node)
  • You get: total nodes equal to #ofCPUs/#ofTasksPerNode
  • You specify: desired Memory per node
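
As a quick sanity check of the node-count arithmetic above, with hypothetical numbers (96 tasks at 48 tasks per node):

```shell
ntasks=96
ntasks_per_node=48
# Ceiling division: Slurm allocates enough whole nodes to hold all tasks
nodes=$(( (ntasks + ntasks_per_node - 1) / ntasks_per_node ))
echo "$nodes nodes"    # prints "2 nodes"
```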

Slurm allows users to specify resources in units of Tasks, CPUs, Sockets, and Nodes.

There are many overlapping settings and some settings may (quietly) overwrite the defaults of other settings. A good understanding of Slurm options is needed to correctly utilize these methods.

Alternative Memory/Core/Node Specifications

Specification | Option | Example | Example Purpose
Node Count | --nodes=[#] | --nodes=4 | Spread all tasks/cores across 4 nodes
CPUs per Task | --cpus-per-task=[#] | --cpus-per-task=4 | Require 4 CPUs per task (default: 1)
Memory per CPU | --mem-per-cpu=value[K|M|G|T] | --mem-per-cpu=2000M | Request 2000 MB per CPU (default: 1024 MB per CPU)
Memory per Node (All, Multi) | --mem=0 | --mem=0 | Request the least-max available memory for any node across all nodes
Tasks per Socket | --ntasks-per-socket=[#] | --ntasks-per-socket=6 | Request a max of 6 tasks per socket
Sockets per Node | --sockets-per-node=[#] | --sockets-per-node=2 | Restrict to nodes with at least 2 sockets

If you want to make resource requests in an alternative format, you are free to do so. Our ability to support alternative resource request formats may be limited.
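
For illustration, the two resource headers below request a comparable footprint in different units. This is a sketch; the specific values are hypothetical, and the interaction of these flags should be checked against the Slurm documentation before use.

```shell
# Task-based request: 96 tasks at 48 per node -> 2 nodes
#SBATCH --ntasks=96
#SBATCH --ntasks-per-node=48
#SBATCH --mem=240G               # memory per node

# Node-based request for a comparable footprint
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH --mem-per-cpu=5000M      # 48 tasks x 5000 MB = 240000 MB (~240 GB) per node
```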

Environment Variables

All the nodes enlisted for the execution of a job carry most of the environment variables the login process created: HOME, SCRATCH, PWD, PATH, USER, etc. In addition, Slurm defines new ones in the environment of an executing job. Below is a list of the most commonly used environment variables.

Variable | Usage | Description
Job ID | $SLURM_JOBID | Batch job ID assigned by Slurm.
Job Name | $SLURM_JOB_NAME | The name of the job.
Queue | $SLURM_JOB_PARTITION | The name of the queue the job is dispatched from.
Submit Directory | $SLURM_SUBMIT_DIR | The directory the job was submitted from.
Temporary Directory | $TMPDIR | A directory assigned locally on the compute node for the job, located at /tmp/job.$SLURM_JOBID. Use of $TMPDIR is recommended for jobs that use many small temporary files.

Basic Slurm Environment Variables

Note: To see all relevant Slurm environment variables for a job, add the following line to the executable section of a job file and submit that job. All the variables will be printed in the output file.

env | grep SLURM
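
A sketch of how these variables might be used in a job's executable section. The program name myProgram.o and the results file are hypothetical stand-ins for your own application.

```shell
echo "Job $SLURM_JOBID ($SLURM_JOB_NAME) started from $SLURM_SUBMIT_DIR"
cd $TMPDIR                         # work in the node-local temporary directory
$SLURM_SUBMIT_DIR/myProgram.o      # run the program from the submit directory
cp results.out $SLURM_SUBMIT_DIR   # copy results back before the job ends
```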

Executable Commands

After the resource specification section of a job file comes the executable section. This executable section contains all the necessary UNIX, Linux, and program commands that will be run in the job. Some commands that may go in this section include, but are not limited to:

  • Changing directories
  • Loading, unloading, and listing modules
  • Launching software

An example of a possible executable section is below:

cd $SCRATCH      # Change current directory to /scratch/user/[username]/
ml purge         # Purge all modules
ml intel/2022a   # Load the intel/2022a module
ml               # List all currently loaded modules

./myProgram.o    # Run "myProgram.o"

For information on the module system or specific software, visit our Modules page and our Software page.

Job Submission

Once your job script is ready, it is time to submit the job. You can submit your job to the Slurm batch scheduler using the sbatch command. For example, suppose you created a batch file named MyJob.slurm; the command to submit the job is as follows:

[username@aces ~]$ sbatch MyJob.slurm
Submitted batch job 3606
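
When scripting submissions, sbatch's --parsable option prints only the job ID, which is convenient to capture in a variable. This is a general Slurm pattern, shown here as a sketch using the example file above:

```shell
jid=$(sbatch --parsable MyJob.slurm)   # --parsable prints just the job ID
echo "Submitted batch job $jid"
squeue --job "$jid"                    # check its status later
```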

Job Monitoring and Control Commands

After a job has been submitted, you may want to check on its progress or cancel it. Below is a list of the most used job monitoring and control commands for jobs on ACES.

Function | Command | Example
Submit a job | sbatch [script_file] | sbatch FileName.job
Cancel/kill a job | scancel [job_id] | scancel 101204
Check status of a single job | squeue --job [job_id] | squeue --job 101204
Check status of all jobs for a user | squeue -u [user_name] | squeue -u User1
Check CPU and memory efficiency for a job (use only on finished jobs) | seff [job_id] | seff 101204
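
One possible pattern (not ACES-specific) for waiting until a job leaves the queue before checking its efficiency; the job ID below is hypothetical:

```shell
jid=101204
# Poll until squeue no longer lists the job, then report its efficiency
while squeue --job "$jid" 2>/dev/null | grep -q "^ *$jid"; do
    sleep 60
done
seff "$jid"
```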

Here is an example of the information that the seff command provides for a completed job:

% seff 12345678
Job ID: 12345678
Cluster: ACES
User/Group: username/groupname
State: COMPLETED (exit code 0)
Nodes: 16
Cores per node: 28
CPU Utilized: 1-17:05:54
CPU Efficiency: 94.63% of 1-19:25:52 core-walltime
Job Wall-clock time: 00:05:49
Memory Utilized: 310.96 GB (estimated maximum)
Memory Efficiency: 34.70% of 896.00 GB (56.00 GB/node)

Job Examples

Several examples of Slurm job files for ACES are listed below.

NOTE: Job examples are NOT lists of commands, but are a template of the contents of a job file. These examples should be pasted into a text editor and submitted as a job to be tested, not entered as commands line by line.

There are several optional parameters available for jobs on ACES. In the examples below, they are commented out/ignored via ##. If you wish to include any of these as parameters for your jobs, change ## to a single # and adjust the parameter value accordingly.

Example Job 1: A serial job (single core, single node)


#SBATCH --job-name=JobExample1       #Set the job name to "JobExample1"
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr and 30min
#SBATCH --ntasks=1                   #Request 1 task
#SBATCH --mem=5000M                  #Request 5000MB (5GB) per node
#SBATCH --output=Example1Out.%j      #Send stdout/err to "Example1Out.[jobID]"

##SBATCH --account=123456             #Set billing account to 123456
##SBATCH --mail-type=ALL              #Send email on all job events
##SBATCH --mail-user=email_address    #Send all emails to email_address

#First Executable Line

Example Job 2: A multi core, single node job


#SBATCH --job-name=JobExample2       #Set the job name to "JobExample2"
#SBATCH --time=6:30:00               #Set the wall clock limit to 6hr and 30min
#SBATCH --nodes=1                    #Request 1 node
#SBATCH --ntasks-per-node=96         #Request 96 tasks/cores per node
#SBATCH --mem=488G                   #Request 488G (488GB) per node
#SBATCH --output=Example_SNMC_CPU.%j #Redirect stdout/err to file
#SBATCH --partition=cpu              #Specify partition to submit job to

##SBATCH --account=123456             #Set billing account to 123456
##SBATCH --mail-type=ALL              #Send email on all job events
##SBATCH --mail-user=email_address    #Send all emails to email_address

#First Executable Line

Example Job 3: A multi core, multi node job


#SBATCH --job-name=Example_MNMC_CPU  #Set the job name to Example_MNMC_CPU
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr 30min
#SBATCH --nodes=2                    #Request 2 nodes
#SBATCH --ntasks-per-node=96         #Request 96 tasks/cores per node
#SBATCH --mem=488G                   #Request 488G (488GB) per node
#SBATCH --output=Example_MNMC_CPU.%j #Redirect stdout/err to file
#SBATCH --partition=cpu              #Specify partition to submit job to

##SBATCH --account=123456            #Set billing account to 123456
##SBATCH --mail-type=ALL             #Send email on all job events
##SBATCH --mail-user=email_address   #Send all emails to email_address

#First Executable Line

Example Job 4: A serial GPU job (single node, single core)


#SBATCH --job-name=Example_SNSC_GPU  #Set the job name to Example_SNSC_GPU
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr 30min
#SBATCH --ntasks=1                   #Request 1 task
#SBATCH --mem=488G                   #Request 488G (488GB) per node
#SBATCH --output=Example_SNSC_GPU.%j #Redirect stdout/err to file
#SBATCH --partition=gpu              #Specify partition to submit job to
#SBATCH --gres=gpu:h100:1            #Specify GPU(s) per node, 1 H100 GPU

##SBATCH --account=123456            #Set billing account to 123456
##SBATCH --mail-type=ALL             #Send email on all job events
##SBATCH --mail-user=email_address   #Send all emails to email_address

#First Executable Line

Example Job 5: A multi core, single node GPU job


#SBATCH --job-name=Example_SNMC_GPU  #Set the job name to Example_SNMC_GPU
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr 30min
#SBATCH --nodes=1                    #Request 1 node
#SBATCH --ntasks-per-node=48         #Request 48 tasks/cores per node
#SBATCH --mem=488G                   #Request 488G (488GB) per node
#SBATCH --output=Example_SNMC_GPU.%j #Redirect stdout/err to file
#SBATCH --partition=gpu              #Specify partition to submit job to
#SBATCH --gres=gpu:h100:4            #Specify GPU(s) per node, 4 H100 GPUs

##SBATCH --account=123456            #Set billing account to 123456
##SBATCH --mail-type=ALL             #Send email on all job events
##SBATCH --mail-user=email_address   #Send all emails to email_address

#First Executable Line

Example Job 6: A parallel GPU job (multiple node, multiple core)


#SBATCH --job-name=Example_MNMC_GPU  #Set the job name to Example_MNMC_GPU
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr 30min
#SBATCH --nodes=2                    #Request 2 nodes
#SBATCH --ntasks-per-node=48         #Request 48 tasks/cores per node
#SBATCH --mem=488G                   #Request 488G (488GB) per node
#SBATCH --output=Example_MNMC_GPU.%j #Redirect stdout/err to file
#SBATCH --partition=gpu              #Specify partition to submit job to
#SBATCH --gres=gpu:h100:1            #Specify GPU(s) per node, 1 H100 GPU

##SBATCH --account=123456            #Set billing account to 123456
##SBATCH --mail-type=ALL             #Send email on all job events
##SBATCH --mail-user=email_address   #Send all emails to email_address

#First Executable Line

Batch Queues

Upon job submission, Slurm sends your jobs to appropriate batch queues. These are (software) service stations configured to control the scheduling and dispatch of jobs that have arrived in them. Batch queues are characterized by all sorts of parameters. Some of the most important are:

  1. The total number of jobs that can be concurrently running (number of run slots)
  2. The wall-clock time limit per job
  3. The type and number of nodes available for jobs

These settings control whether a job will remain idle in the queue or be dispatched quickly for execution.

The current queue structure is as follows (updated on August 08, 2023):

Queue Name | Max Nodes per Job (Max Cores) | Max Devices | Max Duration | Max Jobs in Queue
cpu | 64 nodes (6,144 cores) | 0 | 7 days | 50
gpu | 15 nodes (1,440 cores) | 30 | 2 days | 50
atsp | 1 node (96 cores) | 10 | 2 days | 50
bittware | 1 node (96 cores) | 2 | 2 days | 50
d5005 | 1 node (96 cores) | 1 | 2 days | 50
memverge | 1 node (96 cores) | 1 | 2 days | 50
nextsilicon* | 1 node (96 cores) | 1 | 2 days | 50

* The nextsilicon queue is available upon request.

Checking queue usage

  • The sinfo command can be used to get information on queues and their nodes.
[username@aces ~]$ sinfo
    PARTITION        AVAIL  TIMELIMIT    JOB_SIZE   NODES(A/I/O/T)   CPUS(A/I/O/T)
    cpu*             up     7-00:00:00   1-32        0/63/3/75        0/6048/960/7008
    gpu              up     2-00:00:00   1-15        0/15/2/17        0/1344/288/1632
    atsp             up     2-00:00:00   1           0/5/1/6          0/384/192/576
    bittware         up     2-00:00:00   1           1/0/0/1          96/0/96/192
    d5005            up     2-00:00:00   1           0/2/0/2          0/192/0/192
    memverge         up     2-00:00:00   1           0/8/0/8          0/768/0/768

Note: A/I/O/T stands for Active, Idle, Offline, and Total respectively.

Checking node usage

  • The pestat command can be used to generate a list of nodes and their corresponding information, including their CPU usage.
[username@aces ~]$ pestat
    Hostname       Partition     Node Num_CPU  CPUload  Memsize  Freemem  Joblist
                                State Use/Tot  (15min)     (MB)     (MB)  JobID User ...
    ac001           memverge    idle    0  96    0.00    500000   511731   
    ac002           memverge    idle    0  96    0.00    500000   511741   
    ac003           memverge    idle    0  96    0.00    500000   511736   
    ac004           memverge    idle    0  96    0.00    500000   511730   
    ac005               atsp    idle    0  96    0.00    500000   510883   
    ac006               atsp    idle    0  96    0.00    500000   511819   
    ac007               atsp    idle    0  96    0.00    500000   511832   
    ac008              d5005    idle    0  96    0.00    500000   511815   
    ac009               cpu*    idle    0  96    0.00    500000   511705   
    ac010               cpu*    idle    0  96    0.00    500000   511812   
    ac011               cpu*    idle    0  96    0.00    500000   511823   
    ac012               cpu*    idle    0  96    0.00    500000   511825   
  • To generate a list of the nodes in the gpu queue and their current configuration:
[username@aces ~]$ pestat -p gpu -G
    Hostname       Partition     Node Num_CPU  CPUload  Memsize  Freemem  GRES/node         Joblist
                                State Use/Tot  (15min)     (MB)     (MB)                    JobID User GRES/job ...
    ac036                gpu    idle    0  96    0.00    500000   509372  gpu:h100:2(S:0-1)  
    ac037                gpu    idle    0  96    0.00    500000   510393  gpu:h100:2(S:0-1)  
    ac038                gpu    idle    0  96    0.00    500000   510545  gpu:h100:2(S:0-1)  
    ac039                gpu    idle    0  96    0.00    500000   511026  gpu:h100:2(S:0-1)  
    ac040                gpu    idle    0  96    0.00    500000   511063  gpu:h100:2(S:0-1)  
    ac046                gpu    idle    0  96    0.00    500000   510699  gpu:h100:2(S:0-1)  
    ac047                gpu    idle    0  96    0.00    500000   510945  gpu:h100:2(S:0-1)  
    ac048                gpu    idle    0  96    0.00    500000   510989  gpu:h100:2(S:0-1)  
    ac049                gpu    idle    0  96    0.00    500000   510397  gpu:h100:2(S:0-1)  
    ac050                gpu    idle    0  96    0.00    500000   510270  gpu:h100:2(S:0-1)  
  • The above table can be summarized using the gpuavail command:
[username@aces ~]$ gpuavail
    NODE            NODE 
    TYPE            COUNT
    gpu:h100:2       15
    gpu:a30:6         1

    NODE       GPU     GPU    GPU
               TYPE    COUNT  AVAIL
    ac040      h100    2      2
    ac064       a30    6      4

Checking bad nodes

  • Slurm can report nodes that are down or drained, along with the reason each was marked bad.
  • The following output is just an example; users should run the command themselves to see a current list.
[username@aces ~]$ sinfo -R -l
    REASON                   USER             TIMESTAMP            STATE        NODELIST
    testing DIMM B5          somebody         2023-07-01T12:31:22  drained*     ac017
    Not responding           slurm            2023-07-19T12:10:47  down*        ac001


Checkpointing

Checkpointing is the practice of saving a job's state so that, if interrupted, it can resume without starting completely over. This technique is especially important for long jobs on the batch system, because each batch queue has a maximum walltime limit.

A checkpointed job file is particularly useful for the gpu queue, which is limited to 2 days of walltime due to its demand. There are many cases of jobs that require the use of GPUs and must run longer than two days, such as training a machine learning model.

Users can change their code to implement save states so that their code restarts automatically when cut off by the walltime limit. There are many ways to checkpoint a job depending on the software used, but it is almost always done at the application level. How frequently save states are made depends on the fault tolerance the job needs; in the case of the batch system, the exact time of the 'fault' is known: it is the walltime limit of the queue. In that case, only one checkpoint needs to be created, right before the limit is reached.
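
One common shell-level sketch of this idea: run the application under a timer slightly shorter than the queue's walltime limit, and resubmit the same script if the run did not finish. The program name and its --restart-from flag are hypothetical placeholders; real checkpoint/restart mechanics are application-specific.

```shell
#!/bin/bash
#SBATCH --time=48:00:00              # e.g. a 2-day queue walltime limit

# Run for at most 47 hours, leaving time to save state and resubmit.
# myProgram and --restart-from stand in for your application's own
# checkpoint/restart mechanism.
timeout 47h ./myProgram --restart-from checkpoint.dat
status=$?

if [ "$status" -eq 124 ]; then       # timeout exits 124 when the limit was hit
    sbatch "$0"                      # resubmit this script to continue the run
fi
```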

Advanced Documentation

This guide only covers the most commonly used options and useful commands.

For more information, check the man pages for individual commands or the Slurm Documentation.
