Current Cluster Comparisons

| Attribute | Grace | FASTER | Lonestar6 * | ACES | Launch | ViDal 2.0 |
|---|---|---|---|---|---|---|
| Total SUs per Fiscal Year | 391,186,560 | 100,915,200 | 12,800,000 | 104,138,880 | 75,686,400 | 20,183,400 |
| Peak DP FLOPS | 6.3 PF | 1.2 PF | 2.9 PF | 4.2 PF | 436 TF | 330 TF |
| **Login Resources** | | | | | | |
| # Nodes | 5 | 4 | 3 | 2 | 2 | N/A |
| # Cores | 240 | 256 | 384 | 192 | 64 | N/A |
| # Cores/Node | 48 | 64 | 128 | 96 | 32 | N/A |
| Node Configurations | 2x 384GB<br>1x 384GB with 1 A100 GPU<br>1x 384GB with 2 T4 GPUs<br>1x 384GB with 2 RTX 6000 GPUs | 4x 256GB | 3x 256GB | 2x 512GB | 1x 384GB<br>1x 384GB with 1 A30 GPU | N/A |
| **Compute Resources** | | | | | | |
| # Nodes | 940 | 180 | 560 | 130 | 45 | 18 |
| # Cores | 45,376 | 11,520 | 73,984 | 11,888 | 8,640 | 2,304 |
| # Cores/Node | 932x 48/node<br>8x 80/node | 180x 64/node | 578x 128/node | 110x 96/node<br>1x 48/node<br>1x 128/node<br>18x 64/node | 45x 192/node | 18x 128/node |
| Node Configurations | 800x 384GB<br>100x 384GB and 2x A100<br>9x 384GB and 2x RTX 6000<br>8x 384GB and 4x T4<br>15x 384GB and 3x A40<br>8x 3TB | 180x 256GB<br>See below for Liqid fabric devices. ** | 560x 256GB<br>18x 256GB and 2x A100 | 110x 512GB<br>17x 512GB<br>1x 768GB and 8x VE<br>1x 768GB with 16 classic IPUs<br>1x 512GB with 16 Bow2000 IPUs<br>See below for Liqid fabric devices. *** | 35x 384GB<br>10x 768GB and 2x A30 | 10x 384GB<br>4x 384GB and 2x H100 NVL<br>4x 3.0TB |
| Local Disk/Node | 1x 1.6TB NVMe disk | 1x 3.84TB NVMe disk | 1x 144GB SSD disk | 1x 1.6TB NVMe disk | 1x 200GB disk | 1.92TB NVMe disk |
| **Other** | | | | | | |
| Interconnect | HDR100 InfiniBand | HDR100 InfiniBand | HDR100 InfiniBand | NDR200 InfiniBand | HDR100 InfiniBand | 100Gb Ethernet |
| Global Storage | 8.6 PB (shared between Grace and FASTER) | 8.6 PB (shared between Grace and FASTER) | 8 PB | 2.6 PB | 2.2 PB | 3.3 PB |

* Lonestar6 is hosted at TACC

** FASTER cluster accelerators on Liqid fabrics: 200x T4, 40x A100, 8x A10, 4x A30, 8x A40 GPUs

*** ACES cluster accelerators on Liqid fabrics: 30x H100 GPUs, 120x PVC GPUs, 4x A30 GPUs, 2x D5005 FPGAs, 2x IA-840F FPGAs, and 2x NextSilicon accelerators
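
As a rough sanity check on the SU totals above, the annual figures for several clusters equal compute core count x 8,760 hours per year, assuming the common convention of 1 SU per CPU core-hour; the remaining clusters' totals (e.g., Grace and Lonestar6) do not follow from this simple product. The short Python sketch below illustrates the calculation using core counts taken from the table; cluster names and counts are copied from above, and the 1 SU = 1 core-hour assumption is ours.

```python
# Sketch of the SU arithmetic, assuming 1 SU = 1 CPU core-hour.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

# Compute core counts copied from the comparison table above.
compute_cores = {
    "FASTER": 11_520,
    "ACES": 11_888,
    "Launch": 8_640,
}

for cluster, cores in compute_cores.items():
    sus_per_year = cores * HOURS_PER_YEAR
    print(f"{cluster}: {sus_per_year:,} SUs per fiscal year")

# FASTER: 100,915,200 SUs per fiscal year
# ACES: 104,138,880 SUs per fiscal year
# Launch: 75,686,400 SUs per fiscal year
```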