
Ada:Usable Memory for Batch Jobs

While Ada nodes have 64 GB, 192 GB, or 256 GB of RAM, part of that memory is reserved for the node's operating system and system software and is not available to batch jobs. In most cases, LSF will simply not schedule a job if no node can satisfy an excessive memory request.

The table below lists the approximate usable memory limits for each type of Ada node and our suggestions for requesting memory within them.

Memory Limits of Nodes

                           64GB Nodes                    256GB Nodes                   192GB Nodes (v100 queue)
Node Count                 811                           26                            4
Number of Cores            20 (2 sockets x 10 cores)     20 (2 sockets x 10 cores)     24
Memory Limit per Core      2560 MB (2.5 GB)              12400 MB (12 GB)              7600 MB (7.4 GB)
Memory Limit per Node      56000 MB (54 GB)              248000 MB (242 GB)            184000 MB (180 GB)
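
As a rough sketch (not an example from this page), a batch script for a whole 64GB node could reserve memory within the per-core limit above. The job name, output file, and executable are placeholders, and this assumes the usual Ada convention that rusage[mem] and -M are specified per core, in MB:

  #BSUB -J mem_example           # job name (placeholder)
  #BSUB -L /bin/bash             # use bash as the job's login shell
  #BSUB -W 2:00                  # wall-clock limit of 2 hours
  #BSUB -n 20                    # request 20 cores
  #BSUB -R "span[ptile=20]"      # place all 20 cores on one node (a whole 64GB node)
  #BSUB -R "rusage[mem=2560]"    # reserve 2560 MB per core, the 64GB-node limit
  #BSUB -M 2560                  # per-process memory limit, in MB
  #BSUB -o mem_example.%J        # standard output file; %J expands to the job ID

  # 20 cores x 2560 MB = 51200 MB, safely under the 56000 MB usable on a 64GB node
  ./my_program                   # placeholder for the real executable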

LSF may leave your job pending for an excessive time (or indefinitely) while it waits for one of the few nodes with sufficient memory to become free.
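
If a job sits in the PEND state for a long time, the standard LSF queries below show the scheduler's pending reasons; <jobid> is a placeholder for your job ID:

  bjobs -p <jobid>     # list the pending job with a brief pending reason
  bjobs -l <jobid>     # long format, including PENDING REASONS and the resource request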