
Grace:Batch Queues


Batch Queues

Upon job submission, Slurm sends your jobs to the appropriate batch queues. These are (software) service stations configured to control the scheduling and dispatch of the jobs that arrive in them. Batch queues are characterized by many parameters; some of the most important are:

  1. The total number of jobs that can be concurrently running (number of run slots)
  2. The wall-clock time limit per job
  3. The type and number of nodes it can dispatch jobs to

These settings control whether a job will remain idle in the queue or be dispatched quickly for execution.
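For example, a minimal batch script might look like the following sketch (the job name, resource amounts, and program name are placeholders; adjust them for your own work). The requested walltime and core count determine which of the queues below can run the job, and a partition can also be named explicitly with the --partition option.

#!/bin/bash
#SBATCH --job-name=queue_demo        # placeholder job name
#SBATCH --time=01:30:00              # 1.5 hr walltime fits within the short queue's 2 hr limit
#SBATCH --ntasks=48                  # one 48-core node's worth of tasks
#SBATCH --mem=8G                     # modest per-node memory request
#SBATCH --output=queue_demo.%j.out   # %j expands to the job ID

# my_program is a placeholder executable; replace it with your own application.
./my_program

Submit the script with sbatch:

[NetID@grace1 ~]$ sbatch queue_demo.slurm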

The current queue structure (last updated January 11, 2021) is:

Queue    Job Max Cores / Nodes    Job Max Walltime   Compute Node Types        Per-User Limits Across Queues
short    1536 cores / 32 nodes    2 hr               384 GB nodes (800)        6144 cores per user
medium   6144 cores / 128 nodes   1 day
long     3072 cores / 64 nodes    7 days
xlong    1536 cores / 32 nodes    21 days
gpu      1344 cores / 48 nodes    4 days             A100 GPU nodes (100),
                                                     RTX 6000 GPU nodes (9),
                                                     T4 GPU nodes (8)
bigmem   192 cores / 4 nodes      2 days             large memory nodes (8)

Note: The xlong queue is for jobs needing to run longer than 7 days. Submit jobs to this partition with the --partition xlong option.
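For example, a job that needs more than 7 days of walltime could be submitted to xlong as follows (the script name is a placeholder):

[NetID@grace1 ~]$ sbatch --partition xlong my_long_job.slurm

The same request can be made inside the job script itself:

#SBATCH --partition=xlong
#SBATCH --time=14-00:00:00    # for example, 14 days, within xlong's 21-day limit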


Checking queue usage

The following command can be used to get information on queues and their nodes.

[NetID@grace1 ~]$ sinfo

Example output:

PARTITION  AVAIL  TIMELIMIT  JOB_SIZE  NODES(A/I/O/T)  CPUS(A/I/O/T)
short*     up     2:00:00    1-32      32/763/5/800    1496/36664/240/38400


Note: A/I/O/T stands for Allocated, Idle, Other, and Total
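sinfo can also be restricted to a single queue, or switched to a node-oriented view, which is often easier to read than the full listing (the partition names here are taken from the table above):

[NetID@grace1 ~]$ sinfo --partition=short        # summary for the short queue only
[NetID@grace1 ~]$ sinfo --partition=gpu -N -l    # one line per node in the gpu queue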

Checking node usage

The following command can be used to generate a list of nodes and their corresponding information, including their CPU usage.

[NetID@grace1 ~]$ pestat

Example output:

Hostname       Partition     Node Num_CPU  CPUload  Memsize  Freemem  Joblist
                          State Use/Tot              (MB)     (MB)  JobId User ...
knl-0101             knl   drain$   0  68    0.00*    88000        0   
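pestat also accepts a few filters. Assuming the commonly distributed version of pestat is installed (run pestat -h on Grace to confirm the options available locally), -p limits the output to one partition and -u shows only the nodes running a given user's jobs:

[NetID@grace1 ~]$ pestat -p short     # nodes in the short partition only
[NetID@grace1 ~]$ pestat -u $USER     # only nodes running your own jobs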


Checkpointing

Checkpointing is the practice of creating a save state of a job so that, if interrupted, it can begin again without starting completely over. This technique is especially important for long jobs on the batch systems, because each batch queue has a maximum walltime limit.


A checkpointed job file is particularly useful for the gpu queue, which is limited to 4 days of walltime due to its high demand. There are many cases of jobs that require the use of GPUs and must run longer than that limit, such as training a machine learning model.


Users can modify their code to implement save states so that their jobs can restart automatically after being cut off by the walltime limit. There are many ways to checkpoint a job depending on the software used, but it is almost always done at the application level. How frequently save states are written is up to the user and depends on the fault tolerance the job needs; in the case of the batch system, however, the exact time of the 'fault' is known in advance: it is simply the walltime limit of the queue. In that case only one checkpoint needs to be created, shortly before the limit is reached. Many resources on checkpointing techniques are available; some examples for common software are listed below.
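Before the software-specific examples, the sketch below shows one generic way to arrange this in a Slurm batch script. It assumes a hypothetical application, my_app, that writes a checkpoint file when it receives SIGUSR1 and can resume from that file with a --resume flag; the script and file names are placeholders, and your own application's checkpoint/restart mechanism will differ. Slurm's --signal option is used here to deliver the signal shortly before the walltime limit so that a single save state can be written just in time.

#!/bin/bash
#SBATCH --job-name=ckpt_demo
#SBATCH --time=7-00:00:00           # e.g. the long queue's 7-day limit
#SBATCH --ntasks=1
#SBATCH --signal=B:USR1@600         # send SIGUSR1 to this batch script 10 minutes before the walltime limit

# Hypothetical application: my_app is assumed to write checkpoint.dat when it
# receives SIGUSR1 and to resume from that file when started with --resume.

checkpoint_and_resubmit() {
    echo "Walltime limit approaching: writing save state and resubmitting"
    kill -USR1 "$APP_PID"           # ask the application to write its checkpoint
    wait "$APP_PID"                 # let it finish writing checkpoint.dat
    sbatch ckpt_demo.slurm          # submit a continuation job (this script's file name, a placeholder)
    exit 0
}
trap checkpoint_and_resubmit USR1

if [ -f checkpoint.dat ]; then
    ./my_app --resume checkpoint.dat &   # continue from the previous save state
else
    ./my_app &                           # first run starts from scratch
fi
APP_PID=$!
wait "$APP_PID"                          # wait returns early when the trapped signal arrives

The continuation job finds checkpoint.dat on disk and resumes from it, so the cycle can repeat until the work is finished.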