
HPRC:Systems


Terra: A Lenovo x86 HPC Cluster


System Name: Terra
Host Name: terra.tamu.edu
Operating System: Linux (CentOS 7)
Total Compute Cores/Nodes: 8,512 cores, 304 nodes
Compute Nodes: 256 compute nodes, each with 64GB RAM
  48 GPU nodes, each with one dual-GPU Tesla K80 accelerator and 128GB RAM
Interconnect: Intel Omni-Path 100 Series switches
Peak Performance: 326 TFLOPS
Global Disk: 2PB (raw) via IBM/Lenovo's GSS26 appliances for general use
  1PB (raw) via Lenovo's GSS24 appliance purchased by and dedicated to GEOSAT
File System: General Parallel File System (GPFS)
Batch Facility: Slurm by SchedMD
Location: Teague Data Center
Production Date: Spring 2017

Terra is an 8,512-core Lenovo commodity cluster. Each compute node has two Intel 64-bit 14-core Broadwell processors. In addition to the 304 compute nodes, there are 3 login nodes (one with a GPU), each with 128 GB of memory. See the Terra Intro Page for more information.

For a quick introduction to Terra and Slurm, see the Terra Quick Start Guide.

For details on using this system, see the User Guide for Terra.
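
Because Terra's batch facility is Slurm, work is submitted as a shell script whose header lines are #SBATCH directives. The sketch below is illustrative only: the job name, time limit, memory request, and output file are placeholders, and any account, partition, or module settings that Terra actually requires are described in the Quick Start Guide and User Guide above.

 #!/bin/bash
 #SBATCH --job-name=example          # placeholder job name
 #SBATCH --time=00:10:00             # wall-clock limit (hh:mm:ss)
 #SBATCH --ntasks=28                 # 28 tasks fill one Terra compute node (two 14-core Broadwell CPUs)
 #SBATCH --mem=56G                   # per-node memory request, kept below the 64GB of physical RAM
 #SBATCH --output=example.%j.out     # job output file; %j expands to the job ID

 # Replace with the actual commands for your job
 hostname

Such a script would be submitted with sbatch (e.g. sbatch example.job) and monitored with squeue.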

Ada: An IBM/Lenovo x86 HPC Cluster


System Name: Ada
Host Name: ada.tamu.edu
Operating System: Linux (CentOS 6)
Total Compute Cores/Nodes: 17,340 cores, 852 nodes
Compute Nodes: 792 compute nodes, each with 64GB RAM
  30 GPU nodes, each with dual GPUs and 64GB or 256GB RAM
  9 Phi nodes, each with dual Phi accelerators and 64GB RAM
  6 large memory compute nodes, each with 256GB RAM
  11 xlarge memory compute nodes, each with 1TB RAM
  4 xlarge memory compute nodes, each with 2TB RAM
Interconnect: FDR-10 Infiniband based on the Mellanox SX6536 (core) and SX6036 (leaf) switches
Peak Performance: ~337 TFLOPS
Global Disk: 4PB (raw) via IBM/Lenovo's GSS26 appliance
File System: General Parallel File System (GPFS)
Batch Facility: Platform LSF
Location: Teague Data Center
Production Date: September 2014

Ada is a 17,340-core IBM/Lenovo commodity cluster. Most of the compute nodes have two Intel 64-bit 10-core Ivy Bridge processors. In addition to the 852 compute nodes, there are 8 login nodes, each with 256 GB of memory and GPUs or Phi coprocessors. See the Ada Intro Page for more information.

For details on using this system, see the User Guide for Ada.
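
Ada uses Platform LSF rather than Slurm, so the batch header lines are #BSUB directives. The following is a minimal sketch with placeholder values; the queue, project, and module settings that Ada actually requires are covered in the User Guide.

 #!/bin/bash
 #BSUB -J example                # placeholder job name
 #BSUB -W 0:10                   # wall-clock limit (hh:mm)
 #BSUB -n 20                     # 20 cores fill one Ada compute node (two 10-core Ivy Bridge CPUs)
 #BSUB -R "span[ptile=20]"       # keep all 20 cores on a single node
 #BSUB -o example.%J.out         # job output file; %J expands to the job ID

 # Replace with the actual commands for your job
 hostname

LSF reads the job script on standard input, so such a script would be submitted with bsub < example.job and monitored with bjobs.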

Lonestar5: A Cray x86 HPC Cluster


System Name: Lonestar 5
Host Names: ls5.tacc.utexas.edu
Operating System: Linux
Number of Nodes: 1252 compute nodes, each with 24 cores and 64 GB of memory
  8 large memory compute nodes, each with 32 cores and 512 GB of memory
  2 large memory compute nodes, each with 20 cores and 1 TB of memory
Memory Sizes: 1252 nodes with 64GB
  8 nodes with 512GB
  2 nodes with 1TB
Interconnect Type: Intel (formerly Cray's) Aries interconnect, Dragonfly topology
Peak Performance: 1.25+ PFLOPS
Total Disk: 5.4PB raw DDN (DataDirect Networks) RAID storage (more details to come)
File System: Lustre
Batch Facility: Slurm Workload Manager
Location: TACC
Production Date: March 2016

Lonestar 5 is the latest in a series of Lonestar clusters hosted at TACC. Jointly funded by the University of Texas System, Texas A&M University, and Texas Tech University, it provides additional resources to TAMU researchers.


Note: Effective September 27, 2016, all users must authenticate using Multi-Factor Authentication (MFA) in order to access TACC resources. More information is available on our TACC MFA page.
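
Login is over SSH to the host name listed above; with MFA enabled, the password prompt is typically followed by a prompt for a token code. In the sketch below, username is a placeholder for your TACC account name:

 ssh username@ls5.tacc.utexas.edu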

HPRC Lab: Cluster Access Workstations


System Name: HPRC Lab Workstations
Host Names: hprclab[0-7]@tamu.edu
Operating System: Fedora 23 x86_64
Processor: Intel Core i7-4770S @ 3.10GHz (Blocker OAL)
  Intel Core i7-4790S @ 3.20GHz (SCC OAL)
Memory: 8GB RAM
Location: SCC OAL and Blocker OAL

The HPRC Lab workstations serve as a point of access for our clusters and some software. Users may use these workstations to access the supercomputers, and a limited suite of software is installed on the workstations for local use.
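
For example, a user at one of the lab workstations can open a terminal and reach either cluster over SSH (username below is a placeholder for your HPRC account name):

 ssh username@terra.tamu.edu
 ssh username@ada.tamu.edu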

For details on using these workstations, see the User Guide for HPRC Lab.

Note: Access to the HPRC Lab workstations is available to all our active users, but you must first email us requesting access.