
Ada:Usable Memory for Batch Jobs


Usable Memory for Batch Jobs

Although Ada nodes have 64 GB, 192 GB, or 256 GB of RAM, part of that memory is reserved for the operating system and system software on each node. If a job requests more memory than any node can actually provide, LSF will in most cases be unable to schedule it.

The table below summarizes the approximate memory limits of Ada nodes and our suggested per-core and per-node values to request; a job-script sketch follows the table.

Memory Limits of Nodes

                                    64GB Nodes            256GB Nodes           192GB Nodes in v100 queue (*)
  Node Count                        811                   26                    4
  Number of Cores                   20 (2 sockets x 10)   20 (2 sockets x 10)   24 (2 sockets x 12)
  Memory Limit Per Core
  (if using all cores per node)     2560 MB (2.5 GB)      12400 MB (12 GB)      7600 MB (7.4 GB)
  Memory Limit Per Node             56000 MB (54 GB)      248000 MB (242 GB)    184000 MB (180 GB)

  (*) V100 nodes were moved to Terra in preparation for the decommissioning of Ada.
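As a rough sketch (an assumed example, not an excerpt from the Ada batch pages), the directives below show how the 64GB-node figures above could be expressed in an LSF job script. -R "rusage[mem=...]" and -M are standard LSF options; on Ada the values are given in MB per core/process, but verify current defaults and units against the Ada batch documentation. The job name and executable are placeholders.

  #BSUB -J memtest              # job name (placeholder)
  #BSUB -n 20                   # request 20 cores
  #BSUB -R "span[ptile=20]"     # place all 20 cores on one node, i.e. a whole 64GB node
  #BSUB -R "rusage[mem=2560]"   # reserve 2560 MB per core, matching the 64GB-node limit above
  #BSUB -M 2560                 # per-process memory limit of 2560 MB
  #BSUB -W 1:00                 # 1 hour wall-clock limit
  #BSUB -o memtest.%J           # standard output file

  ./my_program                  # placeholder for your executable

Requesting more than 2560 MB per core on a 64GB node means LSF can only place the job by leaving some cores idle or by waiting for a larger-memory node.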

LSF may hold your job in the queue for an excessive time (or indefinitely) while it waits for particular nodes with sufficient memory to become free.
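If a job appears to be stuck in the PEND state, the standard LSF query commands can show why (the job ID below is a placeholder):

  bjobs -p 123456    # list pending reasons, e.g. no host with enough available memory
  bjobs -l 123456    # full job details, including the memory request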