Ada:Batch Queues
Queues
Upon job submission, LSF sends your job to an appropriate batch queue. These are (software) service stations configured to control the scheduling and dispatch of the jobs that arrive in them. Batch queues are characterized by many parameters; some of the most important are:
- the total number of jobs that can be concurrently running (number of run slots)
- the wall-clock time limit per job
- the type and number of nodes it can dispatch jobs to
- which users or user groups can use that queue; etc.
These settings control whether a job will remain idle in the queue or be dispatched quickly for execution.
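These parameters correspond directly to the resource requests in a job script. The fragment below is a minimal sketch of how the number of cores and the wall-clock limit are requested with LSF's #BSUB directives; the job name, core count, walltime, ptile value, and output file name are illustrative, not required values.

  #BSUB -J example_job         # job name (illustrative)
  #BSUB -n 40                  # number of cores requested
  #BSUB -R "span[ptile=20]"    # cores per node; 40 cores at 20 per node = 2 nodes
  #BSUB -W 24:00               # wall-clock limit in hh:mm (here, 24 hours)
  #BSUB -o example_job.%J.out  # standard output file (%J expands to the job ID)
  #
  # ... module loads and application commands would follow here ...

The requested core count and wall-clock limit are what LSF compares against the queue limits in the table below when deciding where to place the job.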
The current queue structure (updated on April 19, 2017) is shown in the table below.
NOTE: Each user is now limited to 8000 cores total for his/her pending jobs across all the queues.
Queue | Job Min/Default/Max Cores | Job Default/Max Walltime | Compute Node Types | Per-Queue Limits | Aggregate Limits Across Queues | Per-User Limits Across Queues | Notes |
---|---|---|---|---|---|---|---|
sn_short | 1 / 1 / 20 | 10 min / 1 hr | 64 GB nodes (811), 256 GB nodes (26) | | Maximum of 6000 cores for all running jobs in the single-node (sn_*) queues. | Maximum of 1000 cores and 100 jobs per user for all running jobs in the single-node (sn_*) queues. | For jobs needing only one compute node. |
sn_regular | 1 / 1 / 20 | 1 hr / 1 day | 64 GB nodes (811), 256 GB nodes (26) | | (same as sn_short) | (same as sn_short) | For jobs needing only one compute node. |
sn_long | 1 / 1 / 20 | 24 hr / 4 days | 64 GB nodes (811), 256 GB nodes (26) | | (same as sn_short) | (same as sn_short) | For jobs needing only one compute node. |
sn_xlong | 1 / 1 / 20 | 4 days / 7 days | 64 GB nodes (811), 256 GB nodes (26) | | (same as sn_short) | (same as sn_short) | For jobs needing only one compute node. |
mn_short | 2 / 2 / 200 | 10 min / 1 hr | 64 GB nodes (811), 256 GB nodes (26) | Maximum of 2000 cores for all running jobs in this queue. | Maximum of 12000 cores for all running jobs in the multi-node (mn_*) queues. | Maximum of 3000 cores and 150 jobs per user for all running jobs in the multi-node (mn_*) queues. | For jobs needing more than one compute node. |
mn_small | 2 / 2 / 120 | 1 hr / 7 days | 64 GB nodes (811), 256 GB nodes (26) | Maximum of 7000 cores for all running jobs in this queue. | (same as mn_short) | (same as mn_short) | For jobs needing more than one compute node. |
mn_medium | 121 / 121 / 600 | 1 hr / 7 days | 64 GB nodes (811), 256 GB nodes (26) | Maximum of 6000 cores for all running jobs in this queue. | (same as mn_short) | (same as mn_short) | For jobs needing more than one compute node. |
mn_large | 601 / 601 / 2000 | 1 hr / 5 days | 64 GB nodes (811), 256 GB nodes (26) | Maximum of 8000 cores for all running jobs in this queue. | (same as mn_short) | (same as mn_short) | For jobs needing more than one compute node. |
xlarge | 1 / 1 / 280 | 1 hr / 10 days | 1 TB nodes (11), 2 TB nodes (4) | | | | For jobs needing more than 256 GB of memory per compute node. |
vnc | 1 / 1 / 20 | 1 hr / 6 hr | GPU nodes (30) | | | | For remote visualization jobs. |
special | None | 1 hr / 7 days | 64 GB nodes (811), 256 GB nodes (26) | | | | Requires permission to access this queue. |
LSF determines which queue will receive a job for processing. The selection is based mainly on the resources (e.g., number of CPUs, wall-clock limit) requested, explicitly or by default. There are two exceptions:
- The xlarge queue, which is associated with nodes that have 1 TB or 2 TB of main memory. To use it, submit jobs with the -q xlarge option along with -R "select[mem1tb]" or -R "select[mem2tb]".
- The special queue, which gives access to all of the compute nodes. You MUST request permission to get access to this queue.
To access either of the above queues, you must use the -q queue_name option in your job script, as shown in the sketch below.
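As a concrete illustration, here is a sketch of the relevant #BSUB lines for each case; the core counts and walltimes are examples only, and a real job script would contain only one of these sets of directives followed by the commands to run.

  # Default routing: no -q option. With 40 cores and a 24-hour limit,
  # the job fits the mn_small limits in the table above (2-120 cores,
  # up to 7 days), so LSF would route it to that queue automatically.
  #BSUB -n 40
  #BSUB -W 24:00

  # Explicit queue selection: a large-memory job on a 1 TB node.
  #BSUB -q xlarge
  #BSUB -R "select[mem1tb]"
  #BSUB -n 20
  #BSUB -W 48:00

  # The special queue likewise requires an explicit request
  # (and prior permission):
  #BSUB -q special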
Output from the bjobs command contains the name of the queue associated with a given job.
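For example, the QUEUE column in the illustrative bjobs output below shows which queue each job was dispatched to; the job IDs, user name, job names, and host names are made up.

  $ bjobs
  JOBID   USER      STAT  QUEUE      FROM_HOST   EXEC_HOST    JOB_NAME     SUBMIT_TIME
  123456  someuser  RUN   mn_small   ada1        20*nxt1401   example_job  Jun 12 10:15
  123457  someuser  PEND  xlarge     ada1                     bigmem_job   Jun 12 10:20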