Hardware

FASTER: A Dell x86 HPC Cluster

System Name: FASTER
Host Name: faster.hprc.tamu.edu
Operating System: Rocky Linux 8
Total Compute Cores/Nodes: 11,520 cores, 180 nodes
Compute Nodes: 180 64-core compute nodes, each with 256 GB RAM
Composable GPUs:
  • 200 T4 16GB GPUs
  • 40 A100 40GB GPUs
  • 8 A10 24GB GPUs
  • 4 A30 24GB GPUs
  • 8 A40 48GB GPUs
Interconnect: Mellanox HDR100 InfiniBand (MPI and storage); Liqid PCIe Gen4 (GPU composability)
Peak Performance: 1.2 PFLOPS
Global Disk: 5 PB (usable) via DDN Lustre appliances
File System: Lustre
Batch Facility: Slurm by SchedMD
Location: West Campus Data Center
Production Date: 2021

FASTER is a 184-node Intel cluster from Dell with an InfiniBand HDR100 interconnect. A100, A10, A30, A40, and T4 GPUs are distributed across the cluster and composable via Liqid PCIe fabrics. All login and compute nodes are based on the Intel Xeon 8352Y Ice Lake processor.

Compute Nodes

Table 2: Details of Compute Nodes

Processor Type: Intel Xeon 8352Y (Ice Lake)
Sockets per Node: 2
Cores per Socket: 32
Cores per Node: 64
Hardware Threads per Core: 2
Hardware Threads per Node: 128
Clock Rate: 2.20 GHz (3.40 GHz max turbo frequency)
Memory: 256 GB DDR4-3200
Cache: 48 MB L3
Local Disk Space: 3.84 TB NVMe (/tmp)

GPUs can be added to compute nodes on the fly by using the "gres" option in a Slurm script. A researcher can request up to 10 GPUs to create these CPU-GPU nodes. The following GPUs can be composed onto the compute nodes:

  • 200 T4 16GB GPUs
  • 40 A100 40GB GPUs
  • 8 A10 24GB GPUs
  • 4 A30 24GB GPUs
  • 8 A40 48GB GPUs
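As a sketch, a Slurm batch script that composes GPUs onto a compute node via the "gres" option might look like the following. The partition name, account, and GPU type string here are placeholders, not values taken from this page; consult the HPRC documentation for the exact names on FASTER.

```shell
#!/bin/bash
#SBATCH --job-name=gpu-example      # job name (placeholder)
#SBATCH --time=01:00:00             # requested wall time
#SBATCH --ntasks=1                  # one task
#SBATCH --cpus-per-task=8           # CPU cores for the task
#SBATCH --mem=32G                   # memory for the job
#SBATCH --partition=gpu             # placeholder partition name
#SBATCH --gres=gpu:t4:4             # compose 4 T4 GPUs onto the node
                                    # (up to 10 GPUs may be requested)

# Confirm the composed GPUs are visible inside the job.
nvidia-smi
```

The "gres" type string (here "t4") selects which of the composable GPU pools listed above the scheduler draws from.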

Login Nodes

Table 3: Details of Login Nodes

Hostnames:
  • faster1.hprc.tamu.edu (TAMU)
  • faster2.hprc.tamu.edu (TAMU)
  • faster3.hprc.tamu.edu (ACCESS)
Processor Type: Intel Xeon 8352Y (Ice Lake)
Memory: 256 GB DDR4-3200
Total Nodes: 4
Cores/Node: 64
Interconnect: InfiniBand HDR100
Local Disk Space: 3.84 TB (/tmp)

Data Transfer Nodes

FASTER has two data transfer nodes for moving data to and from the cluster via the Globus Connect web interface or the Globus command line. Globus Connect Server v5.4 is installed on the data transfer nodes. One data transfer node is dedicated to XSEDE users; its collection is listed as "XSEDE TAMU FASTER".
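A minimal sketch of the command-line route using the standard Globus CLI is shown below. The endpoint UUIDs and paths are placeholders invented for illustration; only the collection name "XSEDE TAMU FASTER" comes from this page.

```shell
# Authenticate with Globus (opens a browser window).
globus login

# Look up the endpoint ID of the FASTER collection by name.
globus endpoint search "XSEDE TAMU FASTER"

# Transfer a file to FASTER (UUIDs and paths below are placeholders).
SRC="11111111-2222-3333-4444-555555555555"   # source endpoint ID
DST="66666666-7777-8888-9999-000000000000"   # FASTER collection endpoint ID
globus transfer "$SRC:/data/results.tar" "$DST:/scratch/user/results.tar" \
    --label "copy to FASTER"
```

The transfer runs asynchronously on the data transfer nodes; its progress can be watched in the Globus web interface or with `globus task show`.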
