
Queues

Upon job submission, LSF sends your job to an appropriate batch queue. These are (software) service stations configured to control the scheduling and dispatch of the jobs that arrive in them. Batch queues are characterized by many parameters. Some of the most important are:

  1. the total number of jobs that can be concurrently running (number of run slots)
  2. the wall-clock time limit per job
  3. the type and number of nodes it can dispatch jobs to
  4. which users or user groups can use that queue; etc.

These settings control whether a job will lie idle in the queue or be dispatched quickly for execution.
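
To see how a particular queue is configured, you can use LSF's standard bqueues command; for example (sn_regular is used here only as an example queue name):

  bqueues                  # list all queues with their status and job slot usage
  bqueues -l sn_regular    # long format: detailed limits and policies for one queue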

The current queue structure (updated on Sep. 23, 2015) is shown below. It is still in flux.

Queue      | Min/Default/Max Cpus | Default/Max Walltime | Compute Node Types              | Notes
-----------|----------------------|----------------------|---------------------------------|------
sn_short   | 1 / 1 / 20           | 10 min / 1 hr        | 64 GB and 256 GB nodes          | Maximum of 6000 cores for all running jobs in the single-node (sn_*) queues. Maximum of 600 cores per-user for all running jobs in the sn_* queues.
sn_regular |                      | 1 hr / 1 day         |                                 |
sn_long    |                      | 24 hr / 4 days       |                                 |
sn_xlong   |                      | 4 days / 7 days      |                                 |
mn_short   | 2 / 2 / 200          | 10 min / 1 hr        |                                 | Maximum of 500 cores for all running jobs in this queue.
mn_small   | 2 / 2 / 120          | 1 hr / 7 days        |                                 | Maximum of 6000 cores for all running jobs in this queue.
mn_medium  | 121 / 121 / 600      | 1 hr / 7 days        |                                 | Maximum of 4000 cores for all running jobs in this queue.
mn_large   | 601 / 601 / 2000     | 1 hr / 2 days        |                                 | Maximum of 3000 cores for all running jobs in this queue.
xlarge     | 1 / 1 / 280          | 1 hr / 10 days       | 1 TB nodes (11), 2 TB nodes (4) |
vnc        | 1 / 1 / 20           | 1 hr / 6 hr          | All 30 nodes with GPUs          |
special    | None                 | 1 hr / 7 days        | 64 GB and 256 GB nodes          | Requires permission to access this queue.
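
To illustrate how the table is read: a request for, say, 40 cores and 2 hours of walltime falls within the mn_small limits above (2-120 cores, walltime up to 7 days) but exceeds mn_short's 1-hour walltime cap. A sketch of the corresponding directives in a job script (the job name and the values themselves are placeholders, not recommendations):

  #BSUB -J example_job     # placeholder job name
  #BSUB -n 40              # 40 cores: within the 2-120 core range of mn_small
  #BSUB -W 2:00            # 2 hours: over mn_short's 1 hr limit, within mn_small's 7 days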

LSF determines which queue will receive a job for processing. The selection is based mainly on the resources (e.g., number of CPUs, wall-clock limit) that the job specifies, either explicitly or by default. There are two exceptions:

  1. The xlarge queue, which is associated with nodes that have 1 TB or 2 TB of main memory. To use it, submit jobs with the -q xlarge option.
  2. The special queue, which gives access to all of the compute nodes. You MUST request permission to get access to this queue.

To access either of the above queues, you must use the -q queue_name option in your job script.
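
For instance, the opening directives of a job script aimed at the xlarge queue might look like the following sketch (the job name, core count, and walltime are placeholder values, not recommendations):

  #BSUB -J bigmem_example   # placeholder job name
  #BSUB -q xlarge           # explicitly select the xlarge (1 TB / 2 TB node) queue
  #BSUB -n 20               # placeholder core count
  #BSUB -W 24:00            # placeholder walltime of 24 hours

The script is then submitted as usual with bsub < jobfile.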

Output from the bjobs command contains the name of the queue associated with a given job.
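
For example (12345 is a placeholder job ID):

  bjobs            # the QUEUE column shows the queue for each of your pending/running jobs
  bjobs -l 12345   # long format: detailed information, including the queue, for one job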

Fair-Share Policy

... pending ...

Public and Private/Group Queues

... pending ...

The Interactive Queue

... pending ...