maxconfig

maxconfig is available on all clusters and displays recommended Slurm #SBATCH parameters for the maximum resource configuration (cores, memory, time) of specific partitions or accelerators.

Use the following command to see usage information and the SU charge rate per GPU:

maxconfig -h

To see the SUs that will be charged for a job script without submitting it, use the -f option:

maxconfig -f my_job_file.slurm

  • The values in the following examples vary between clusters.
  • Run maxconfig on the command line of the cluster you are using to see the maximum resources as well as the current GPU configuration and SU rates.
  • Use the current maxconfig output on each cluster rather than copying values from this page.

Usage

maxconfig -h

maxconfig prints the maximum CPU cores, available memory, and walltime for the cluster's compute nodes in #SBATCH format.
SU calculations are also displayed on clusters where SU charging is enabled.

You can select a specific partition with -p partition_name.

Example Usage:
  # select 1 x t4 GPU for 1 day (selecting 1 of 2 installed GPUs will scale available CPU cores and memory by 1/2)

  maxconfig -g t4 -G 1 -d 1

Options:
    -p partition  show a specific partition (cpu, gpu, atsp)
    -g gpu        show parameters for a specific GPU (a10  a100  a30  a40  t4  (default for -p gpu: t4))
    -G int        number of GPUs; must also specify GPU type with -g;
                      Note: when using -G, selecting fewer GPUs than are on a node will scale down CPUs and memory so the other GPUs are available to other jobs
    -1            show minimum CPU and memory parameters for 1 SU per hour rate (values for -c, -m, -g, -G, -d and -h will be ignored)
    -e email      add #SBATCH lines to receive email notifications about your job
    -f filename   estimate SUs for a job script file (ignores all other options)
    -d int        runtime days
    -h int        runtime hours
    -n int        nodes (default: 1)
    -t int        tasks per node (default: 1)
    -c int        cpus per task (default: 64)
    -m int        total memory in GB per node (default: 240)
    -s            output srun line for interactive jobs instead of #SBATCH parameters
    -h            show help

SU rate per GPU:
GPU     SUs per hour
----    ------------
a10     128
a100    128
a30     128
a40     128
t4       64

Example output

default

maxconfig

partitions:   cpu  gpu  atsp
GPUs in gpu partition:  a100:16  a100:4  a100:8  a10:2  a10:4  a30:2  a40:2  a40:4  t4:2  t4:4  t4:8

Showing max parameters (cores, mem, time) for partition cpu

CPU-billing * hours * nodes =   SUs
         64 *   168 *     1 = 10,752

#!/bin/bash
#SBATCH --job-name=my_job
#SBATCH --time=7-00:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=64
#SBATCH --mem=240G
#SBATCH --output=stdout.%x.%j
#SBATCH --error=stderr.%x.%j
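
The SU estimate above is straightforward arithmetic. As a sanity check (a sketch, not part of maxconfig itself), you can reproduce the number in the shell:

```shell
# Recompute the SU estimate maxconfig printed for the cpu partition:
# CPU-billing * hours * nodes = SUs
# 7 days of walltime = 168 hours
echo $(( 64 * 168 * 1 ))   # prints 10752
```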

gpu

select 1 x t4 GPU for 1 day

maxconfig -g t4 -G 1 -d 1

partitions:   cpu  gpu  atsp
GPUs in gpu partition:  a100:16  a100:4  a100:8  a10:2  a10:4  a30:2  a40:2  a40:4  t4:2  t4:4  t4:8

Showing 1/8 of total cores and memory for using 1 x t4 GPU

(CPU-billing + (GPU-billing * GPU-count)) * hours * nodes = SUs
(          9 + (         64 *         1)) *    24 *     1 = 1,752

#!/bin/bash
#SBATCH --job-name=my_job
#SBATCH --time=1-00:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=30G
#SBATCH --partition=gpu
#SBATCH --gres=gpu:t4:1
#SBATCH --output=stdout.%x.%j
#SBATCH --error=stderr.%x.%j
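
The GPU charge follows the same formula. A quick shell sketch reproducing the numbers above (cpu_billing of 9 is the value maxconfig reported here for 1/8 of a t4 node; your cluster may report something different):

```shell
# (CPU-billing + (GPU-billing * GPU-count)) * hours * nodes = SUs
cpu_billing=9     # reported by maxconfig for 1/8 of a t4 node
gpu_billing=64    # t4 rate from the SU table above
gpu_count=1
hours=24          # -d 1 = 1 day of walltime
nodes=1
echo $(( (cpu_billing + gpu_billing * gpu_count) * hours * nodes ))   # prints 1752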