Current Cluster Comparisons

| Attribute | Terra | ViDal | Grace | FASTER | Lonestar6 \* | ACES |
|---|---|---|---|---|---|---|
| Total SUs per Fiscal Year | 84,376,320 | 9,811,200 | 391,186,560 | 100,915,200 | 12,800,000 | 104,138,880 |
| Peak DP FLOPS | 517 TF | 67 TF | 6.2 PF | 1.2 PF | 2.9 PF | 1.6 PF \*\* |
| **Login Resources** |  |  |  |  |  |  |
| # Nodes | 3 | N/A | 5 | 4 | 3 | 2 |
| # Cores | 84 | N/A | 240 | 256 | 384 | 192 |
| # Cores/Node | 28 | N/A | 48 | 64 | 128 | 96 |
| Node Configurations | 2x 128GB<br>1x 128GB with 2 GPUs | N/A | 2x 384GB<br>1x 384GB with 1 A100 GPU<br>1x 384GB with 2 T4 GPUs<br>1x 384GB with 2 RTX 6000 GPUs | 4x 256GB | 3x 256GB | 2x 512GB |
| **Compute Resources** |  |  |  |  |  |  |
| # Nodes | 320 | 24 | 925 | 180 | 560 | 130 |
| # Cores | 9,632 | 1,120 | 44,656 | 11,520 | 73,984 | 11,888 |
| # Cores/Node | 304x 28/node<br>8x 68/node<br>8x 72/node | 20x 40/node<br>4x 80/node | 917x 48/node<br>8x 80/node | 180x 64/node | 578x 128/node | 110x 96/node<br>1x 48/node<br>1x 128/node<br>18x 64/node |
| Node Configurations | 256x 64GB<br>48x 128GB and 2 GPUs<br>4x 192GB and 2 32GB V100<br>16x 96GB and 16GB MCDRAM (KNL) | 16x 192GB<br>4x 192GB and 2 16GB V100<br>4x 1.5TB | 800x 384GB<br>100x 384GB and 2 A100<br>9x 384GB and 2x RTX 6000<br>8x 384GB and 4x T4<br>15x 384GB and 2x A40<br>8x 3TB | 180x 256GB<br>See below for Liqid fabric devices. \*\*\* | 560x 256GB<br>18x 256GB and 2 A100 | 110x 512GB<br>17x 512GB<br>1x 768GB and 8 VE<br>1x 768GB with 16 classic IPUs<br>1x 512GB with 16 Bow2000 IPUs<br>See below for Liqid fabric devices. \*\*\*\* |
| Local Disk/Node | 1x 1TB disk | 1.6TB or 3.2TB NVMe disks | 1x 1.6TB NVMe disk | 1x 3.84TB NVMe disk | 1x 144GB SSD disk | 1x 1.6TB NVMe disk |
| **Other** |  |  |  |  |  |  |
| Interconnect | Omni-Path Architecture | 40Gb Ethernet | HDR100 InfiniBand | HDR100 InfiniBand | HDR100 InfiniBand | NDR200 InfiniBand |
| Global Storage | 7.4 PB | 2 PB | 5.0 PB (shared between Grace and FASTER) | shared with Grace | 8 PB | 2.6 PB |

\* Lonestar6 is hosted at TACC.

\*\* Will be updated when remaining hardware has been released to market.

\*\*\* FASTER cluster accelerators on Liqid fabrics: 200x T4, 40x A100, 8x A10, 4x A30, 8x A40 GPUs

\*\*\*\* ACES cluster accelerators on Liqid fabrics: 30x H100 GPUs, 120x PVC GPUs, 4x A30 GPUs, 2x D5005 FPGAs, 2x IA-840F FPGAs, and 2x NextSilicon accelerators
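For the locally hosted clusters, the "Total SUs per Fiscal Year" row follows directly from the compute core counts: a short sanity check, assuming one SU accrues per core-hour and a full 8,760-hour year (these conversion assumptions are mine, not stated in the table):

```python
# Check that SUs/fiscal year == total compute cores x hours in a year
# for the locally hosted clusters (assumes 1 SU = 1 core-hour).
HOURS_PER_YEAR = 24 * 365  # 8,760

clusters = {  # name: (total compute cores, published SUs per fiscal year)
    "Terra":  (9_632,  84_376_320),
    "ViDal":  (1_120,  9_811_200),
    "Grace":  (44_656, 391_186_560),
    "FASTER": (11_520, 100_915_200),
    "ACES":   (11_888, 104_138_880),
}

for name, (cores, sus) in clusters.items():
    assert cores * HOURS_PER_YEAR == sus, name

# Lonestar6 (12,800,000 SUs) is deliberately excluded: it is hosted at
# TACC, where the SU figure reflects the allocated share of the system
# rather than cores x hours.
print("all SU totals match cores x 8,760")
```

This is why, for example, Grace's 44,656 cores yield 391,186,560 SUs: 44,656 × 8,760 = 391,186,560.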