=== FASTER: A Dell x86 HPC Cluster ===
----
[[Image:FASTER.jpeg|right|400px|caption]]
{| class="wikitable" style="text-align: center;"
| System Name:
| FASTER
|-
| Host Name:
| faster.tamu.edu
|-
| Operating System:
| Linux (CentOS 8)
|-
| Total Compute Cores/Nodes:
| 11,520 cores<br>180 nodes
|-
| Compute Nodes:
| 180 64-core compute nodes, each with 64GB RAM
|-
| Composable GPUs:
| 200 T4 16GB GPUs<br>40 A100 40GB GPUs<br>8 A10 24GB GPUs<br>4 A30 24GB GPUs<br>8 A40 48GB GPUs
|-
| Interconnect:
| Mellanox HDR100 InfiniBand (MPI and storage)<br>Liqid PCIe Gen4 (GPU composability)
|-
| Peak Performance:
| 1.2 PFLOPS
|-
| Global Disk:
| 5PB (usable) via DDN Lustre appliances
|-
| File System:
| Lustre
|-
| Batch Facility:
| [http://slurm.schedmd.com/ Slurm by SchedMD]
|-
| Location:
| West Campus Data Center
|-
| Production Date:
| 2021
|}
FASTER is a 184-node Intel cluster from Dell with an InfiniBand HDR-100 interconnect. A100, A10, A30, A40, and T4 GPUs are distributed and composable via Liqid PCIe fabrics. All login and compute nodes are based on the [https://ark.intel.com/content/www/us/en/ark/products/212284/intel-xeon-platinum-8352y-processor-48m-cache-2-20-ghz.html Intel Xeon 8352Y Ice Lake processor]. See the [[ FASTER:Intro | FASTER Intro Page]] for more information.

For a quick introduction to FASTER and Slurm, see the [[FASTER:QuickStart | FASTER Quick Start Guide]].

For details on using this system, see the [[FASTER | User Guide for FASTER]].
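Jobs on FASTER are submitted through Slurm. The following is a minimal batch-script sketch for requesting a single composed GPU; the partition name (<code>gpu</code>) and the GPU type string (<code>t4</code>) are illustrative assumptions, so check the [[FASTER | User Guide for FASTER]] for the exact values before submitting.

<pre>
#!/bin/bash
#SBATCH --job-name=gpu-smoke-test       # job name shown by squeue
#SBATCH --time=00:30:00                 # wall-clock limit (hh:mm:ss)
#SBATCH --ntasks=1                      # a single task
#SBATCH --cpus-per-task=4               # four cores on one node
#SBATCH --mem=16G                       # memory for the whole job
#SBATCH --partition=gpu                 # assumed name of the GPU partition
#SBATCH --gres=gpu:t4:1                 # one composed T4 GPU (assumed type string)
#SBATCH --output=gpu-smoke-test.%j.out  # %j expands to the job ID

# Print the GPU that was composed onto this node for the job
nvidia-smi
</pre>

Submit the script with <code>sbatch jobfile</code> and check its state with <code>squeue -u $USER</code>.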
=== ACES: An innovative composable computational testbed ===
----

{| class="wikitable" style="text-align: center;"
! Component:
! Quantity
! Description
|-
| Graphcore IPU
| 16
| style="text-align: left;" | 16 Colossus GC200 IPUs and a dual AMD Rome CPU server on a 100 GbE RoCE fabric
|-
| Intel FPGA PAC D5005
| 2
| style="text-align: left;" | FPGA SoC with Intel Stratix 10 SX FPGAs, 64-bit quad-core Arm Cortex-A53 processors, and 32GB DDR4
|-
| Intel Optane SSDs
| 8
| style="text-align: left;" | 3 TB of Intel Optane SSDs addressable as memory using MemVerge Memory Machine
|}

ACES resources are available through the FASTER system. See the [[ACES | User Guide for ACES]] for more information.


=== Grace: A Dell x86 HPC Cluster ===
----
[[Image:Grace-racks.jpg|right|400px|caption]]

{| class="wikitable" style="text-align: center;" | {| class="wikitable" style="text-align: center;" | ||
| System Name: | | System Name: | ||
− | | | + | | Grace |
|- | |- | ||
| Host Name: | | Host Name: | ||
− | | | + | | grace.hprc.tamu.edu |
|- | |- | ||
| Operating System: | | Operating System: | ||
− | | Linux ( | + | | Linux (CentOS 7) |
|- | |- | ||
− | | Nodes | + | | Total Compute Cores/Nodes: |
− | | | + | | 44,656 cores<br>925 nodes |
|- | |- | ||
− | | Nodes | + | | Compute Nodes: |
− | | | + | | 800 48-core compute nodes, each with 384GB RAM <br> 100 48-core GPU nodes, each with two A100 40GB GPUs and 384GB RAM <br>9 48-core GPU nodes, each with two RTX 6000 24GB GPUs and 384GB RAM<br>8 48-core GPU nodes, each with 4 T4 16GB GPUs<br> 8 80-core large memory nodes, each with 3TB RAM |
|- | |- | ||
− | + | | Interconnect: | |
− | + | | Mellanox HDR 100 InfiniBand | |
− | |||
− | | Interconnect | ||
− | |||
− | |||
− | |||
− | | | ||
|- | |- | ||
| Peak Performance: | | Peak Performance: | ||
− | | | + | | 6.2 PFLOPS |
|- | |- | ||
− | | | + | | Global Disk: |
− | | | + | | 5PB (usable) via DDN Lustre appliances for general use <br>1.4PB (usable) via Lenovo DSS GPFS appliance (purchased by and dedicated for Dr. Junjie Zhang's CryoEM Lab)<br>1.9PB (usable) via Lenovo DSS GPFS appliance (purchased by and dedicated for Dr. Ping Chang's iHESP Lab) |
|- | |- | ||
| File System: | | File System: | ||
− | | | + | | Lustre and GPFS |
|- | |- | ||
| Batch Facility: | | Batch Facility: | ||
− | | | + | | [http://slurm.schedmd.com/ Slurm by SchedMD] |
|- | |- | ||
| Location: | | Location: | ||
− | | | + | | West Campus Data Center |
|- | |- | ||
| Production Date: | | Production Date: | ||
− | | | + | | Spring 2021 |
|} | |} | ||
Grace is an Intel x86-64 Linux cluster with 925 compute nodes (44,656 total cores) and 5 login nodes. There are 800 compute nodes and 117 GPU nodes, each with 384 GB of memory. Among the 117 GPU nodes, 100 have two A100 40GB GPU cards, 9 have two RTX 6000 24GB GPU cards, and 8 have four T4 16GB GPU cards. Each of these compute and GPU nodes is a dual-socket server with two Intel 6248R 3.0GHz 24-core processors, commonly known as Cascade Lake. There are also 8 large memory nodes, each with 3 TB of memory and four Intel 6248 2.5GHz 20-core processors. See the [[ Grace:Intro | Grace Intro Page]] for more information.

For a quick introduction to Grace and Slurm, see the [[Grace:QuickStart | Grace Quick Start Guide]].

For details on using this system, see the [[Grace | User Guide for Grace]].
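Grace also schedules work through Slurm. The sketch below requests one of the 3TB large memory nodes for a single shared-memory job; the partition name (<code>bigmem</code>) and the usable-memory figure are illustrative assumptions, so confirm both in the [[Grace | User Guide for Grace]] before relying on them.

<pre>
#!/bin/bash
#SBATCH --job-name=bigmem-job           # job name shown by squeue
#SBATCH --time=24:00:00                 # wall-clock limit (hh:mm:ss)
#SBATCH --nodes=1                       # one large memory node
#SBATCH --ntasks=1                      # single task
#SBATCH --cpus-per-task=80              # all 80 cores of the node
#SBATCH --mem=2900G                     # most of the 3TB (exact usable amount is an assumption)
#SBATCH --partition=bigmem              # assumed name of the large memory partition
#SBATCH --output=bigmem-job.%j.out      # %j expands to the job ID

# Run a threaded application on all requested cores;
# "my_app" is a placeholder for your own program.
./my_app --threads $SLURM_CPUS_PER_TASK
</pre>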
− | |||
=== Terra: A Lenovo x86 HPC Cluster ===
----
[[Image:Terra-racks.jpg|right|400px]]
{| class="wikitable" style="text-align: center;"
| System Name:
| Terra
|-
| Host Name:
| terra.tamu.edu
|-
| Operating System:
| Linux (CentOS 7)
|-
| Total Compute Cores/Nodes:
| 8,512 cores<br>304 nodes
|-
| Compute Nodes:
| 256 compute nodes, each with 64GB RAM<br>48 GPU nodes, each with one dual-GPU Tesla K80 accelerator and 128GB RAM
|-
| Interconnect:
| Intel Omni-Path 100 Series switches
|-
| Peak Performance:
| 326 TFLOPS
|-
| Global Disk:
| 2PB (raw) via IBM/Lenovo's GSS26 appliances for general use<br>1PB (raw) via Lenovo's GSS24 appliance purchased by and dedicated for GEOSAT
|-
| File System:
| General Parallel File System (GPFS)
|-
| Batch Facility:
| [http://slurm.schedmd.com/ Slurm by SchedMD]
|-
| Location:
| Teague Data Center
|-
| Production Date:
| Spring 2017
|}
Terra is an 8,512-core Lenovo commodity cluster. Each compute node has two [https://ark.intel.com/products/91754/Intel-Xeon-Processor-E5-2680-v4-35M-Cache-2_40-GHz Intel 64-bit 14-core Broadwell processors]. In addition to the 304 compute nodes, there are 3 login nodes (one with a GPU), each with 128 GB of memory. See the [[ Terra:Intro | Terra Intro Page]] for more information.

For a quick introduction to Terra and Slurm, see the [[Terra:QuickStart | Terra Quick Start Guide]].

For details on using this system, see the [[Terra | User Guide for Terra]].
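A plain CPU-only Slurm job on Terra can request whole 28-core Broadwell nodes. The sketch below runs a two-node MPI job; the <code>mpirun</code> launcher, the <code>intel</code> module name, and the <code>my_mpi_app</code> executable are illustrative assumptions that depend on the software stack described in the [[Terra | User Guide for Terra]].

<pre>
#!/bin/bash
#SBATCH --job-name=mpi-test         # job name shown by squeue
#SBATCH --time=02:00:00             # wall-clock limit (hh:mm:ss)
#SBATCH --nodes=2                   # two compute nodes
#SBATCH --ntasks-per-node=28        # one MPI rank per core (28 cores per node)
#SBATCH --mem=56G                   # memory per node
#SBATCH --output=mpi-test.%j.out    # %j expands to the job ID

# Load an MPI-capable toolchain (module name is an assumption)
module load intel

# "my_mpi_app" is a placeholder for your own MPI executable
mpirun ./my_mpi_app
</pre>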
+ | |||
+ | |||
+ | === Ada: An IBM/Lenovo x86 HPC Cluster === | ||
----
[[Image:Ada_racks.jpg|right|400px|caption]]
{| class="wikitable" style="text-align: center;"
| System Name:
| Ada
|-
| Host Name:
| ada.tamu.edu
|-
| Operating System:
| Linux (CentOS 6)
|-
| Total Compute Cores/Nodes:
| 17,340 cores<br>852 nodes
|-
| Compute Nodes:
| 792 compute nodes, each with 64GB RAM<br>30 GPU nodes, each with dual GPUs and 64GB or 256GB RAM<br>9 Phi nodes, each with dual Phi accelerators and 64GB RAM<br>6 large memory compute nodes, each with 256GB RAM<br>11 xlarge memory compute nodes, each with 1TB RAM<br>4 xlarge memory compute nodes, each with 2TB RAM
|-
| Interconnect:
| FDR-10 InfiniBand based on the<br>Mellanox SX6536 (core) and SX6036 (leaf) switches
|-
| Peak Performance:
| ~337 TFLOPS
|-
| Global Disk:
| 4PB (raw) via IBM/Lenovo's GSS26 appliance
|-
| File System:
| General Parallel File System (GPFS)
|-
| Batch Facility:
| Platform LSF
|-
| Location:
| Teague Data Center
|-
| Production Date:
| September 2014
|}
Ada is a 17,340-core IBM/Lenovo commodity cluster. Most of the compute nodes have two [https://ark.intel.com/products/75275/Intel-Xeon-Processor-E5-2670-v2-25M-Cache-2_50-GHz Intel 64-bit 10-core Ivy Bridge processors]. In addition to the 852 compute nodes, there are 8 login nodes, each with 256 GB of memory and GPUs or Phi coprocessors. See the [[ Ada:Intro | Ada Intro Page]] for more information.

For details on using this system, see the [[Ada | User Guide for Ada]].
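Unlike the newer clusters, Ada is scheduled with IBM Platform LSF rather than Slurm, so batch scripts use <code>#BSUB</code> directives and are submitted with <code>bsub</code>. The sketch below requests one full 20-core Ivy Bridge node; the per-core memory values, the <code>intel</code> module name, and <code>my_app</code> are illustrative assumptions, so confirm the recommended settings in the [[Ada | User Guide for Ada]].

<pre>
#!/bin/bash
#BSUB -J one-node-test          # job name
#BSUB -n 20                     # 20 cores in total
#BSUB -R "span[ptile=20]"       # place all 20 cores on one node
#BSUB -R "rusage[mem=2500]"     # reserve 2500 MB per core (assumed value)
#BSUB -M 2500                   # per-process memory limit in MB (assumed value)
#BSUB -W 1:00                   # wall-clock limit (hh:mm)
#BSUB -o one-node-test.%J       # output file; %J expands to the job ID

# Load a toolchain (module name is an assumption) and run a placeholder program
module load intel
./my_app
</pre>

The script is submitted with <code>bsub < jobfile</code>; note that LSF reads the script from standard input rather than taking it as an argument.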
<!--
=== Crick: An IBM POWER7+ BigData Analytics Cluster ===
----
[[Image:Crick5.medium.jpg|right|400px|x300px|upright|Crick/Curie (under construction)]]
{| class="wikitable" style="text-align: center;"
| System Name:
| Crick
|-
| Host Name:
| crick.tamu.edu
|-
| Operating System:
| Linux (Red Hat Enterprise Linux 6)
|-
| Nodes/Cores per node:
| 23/16-core @ 4.2GHz POWER7+
|-
| Memory Sizes:
| 23 nodes with 256GB RAM (DDR3 1066MHz)
|-
| Interconnect Type:
| 10 Gbps Ethernet
|-
| Peak Performance:
| ~13 TFLOPS
|-
| Total Disk:
| ~377TB
|-
| File System:
| GPFS's File Placement Optimizer (FPO) - IBM's HDFS alternative
|-
| Location:
| Wehner Data Center
|-
| Production Date:
| August 2015
|}
Crick is a 368-core IBM POWER7+ BigData cluster. Each compute node has two of IBM's 64-bit 8-core POWER7+ processors. The 23 nodes comprise 1 BigSQL node with 256GB of memory and 14TB (raw) of storage, and 22 data nodes, each with 14TB (raw) of storage for GPFS-FPO and local caching. Crick is primarily used for big data analytics. In addition to these nodes, there are 2 login nodes with 128GB of memory each.

For details on using this system, see the [[Crick | User Guide for Crick]].


=== Curie: An IBM POWER7+ HPC Cluster ===
----
[[Image:Curie1.medium.jpg|right|400px|x300px|upright|Crick/Curie (under construction)]]
{| class="wikitable" style="text-align: center;" | {| class="wikitable" style="text-align: center;" | ||
| System Name: | | System Name: | ||
− | | | + | | Curie |
|- | |- | ||
− | | Host | + | | Host Name: |
− | | | + | | curie.tamu.edu |
|- | |- | ||
| Operating System: | | Operating System: | ||
− | | Linux | + | | Linux (Red Hat Enterprise Linux 6) |
|- | |- | ||
− | | | + | | Nodes/Cores per node: |
− | | | + | | 48/16-core @ 4.2GHz POWER7+ |
|- | |- | ||
− | | | + | | Memory Size: |
− | | | + | | 48 nodes with 256GB RAM (DDR3 1066MHz) |
|- | |- | ||
| Interconnect Type: | | Interconnect Type: | ||
− | | | + | | 10Gbps Ethernet |
− | |||
− | |||
− | |||
|- | |- | ||
| Peak Performance: | | Peak Performance: | ||
− | | | + | | ~26 TFLOPS |
|- | |- | ||
| Total Disk: | | Total Disk: | ||
− | | | + | | 4PB (raw) via IBM/Lenovo's GSS26 appliance (shared with Ada) |
|- | |- | ||
| File System: | | File System: | ||
− | | | + | | General Parallel File System (GPFS) (shared with Ada) |
|- | |- | ||
− | | Batch Facility | + | | Batch Facility |
− | | | + | | Platform LSF |
|- | |- | ||
− | | Location | + | | Location |
− | | | + | | Wehner Data Center |
|- | |- | ||
| Production Date: | | Production Date: | ||
− | | | + | | May 2015 |
|} | |} | ||
'''Curie has been retired.''' It was a 768-core IBM POWER7+ cluster. Each compute node had two IBM 64-bit 8-core POWER7+ processors. In addition to the 48 compute nodes, there were 2 login nodes with 256GB of memory each. Curie's file system and batch scheduler were shared with the Ada cluster.
-->


=== Lonestar6: A Dell x86 HPC Cluster ===
----
[[Image:Lonestar6.jpg|right|400px|caption]]
{| class="wikitable" style="text-align: center;" | {| class="wikitable" style="text-align: center;" | ||
| System Name: | | System Name: | ||
− | | Lonestar | + | | Lonestar 6 |
|- | |- | ||
| Host Names: | | Host Names: | ||
− | | | + | | ls6.tacc.utexas.edu |
|- | |- | ||
| Operating System: | | Operating System: | ||
− | | Linux | + | | Linux (Rocky 8.4) |
|- | |- | ||
| Number of Nodes: | | Number of Nodes: | ||
− | | | + | | 560 compute nodes, each with 128 cores<br>32 GPU nodes (same configuration as compute nodes), each with 3 NVIDIA 40GB A100 GPUs |
|- | |- | ||
− | | | + | | Memory Sizes: |
− | | | + | | 256 GB |
|- | |- | ||
| Interconnect Type: | | Interconnect Type: | ||
− | | | + | | Melanox HDR technology with full HDR (200 GB/s) connectivity |
− | |||
− | |||
− | |||
|- | |- | ||
| Peak Performance: | | Peak Performance: | ||
− | | | + | | 2.8 PFLOPS |
|- | |- | ||
| Total Disk: | | Total Disk: | ||
− | | | + | | 15 PB |
|- | |- | ||
| File System: | | File System: | ||
Line 326: | Line 342: | ||
|- | |- | ||
| Production Date: | | Production Date: | ||
− | | | + | | January 2022 |
|} | |} | ||
− | + | ||
+ | Lonestar6 is the latest in a series of [https://www.tacc.utexas.edu/systems/lonestar LoneStar clusters hosted at TACC]. Jointly funded by the University of Texas System, Texas A&M University, the University of North Texas, and Texas Tech University, it provides additional resources to TAMU researchers. | ||
* Sources:
** http://www.hpcwire.com/2015/07/20/cray-comes-back-to-tacc/
** https://portal.tacc.utexas.edu/user-guides/lonestar6
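Lonestar6 also uses Slurm, but jobs are charged against a TACC allocation and the partition layout differs from the HPRC clusters. The sketch below is a minimal single-node job; the partition name (<code>normal</code>), the allocation name, and <code>my_app</code> are illustrative assumptions, so check the Lonestar6 user guide linked above for current values.

<pre>
#!/bin/bash
#SBATCH --job-name=ls6-test        # job name shown by squeue
#SBATCH --time=01:00:00            # wall-clock limit (hh:mm:ss)
#SBATCH --nodes=1                  # one 128-core node
#SBATCH --ntasks=128               # one task per core
#SBATCH --partition=normal         # assumed name of the standard partition
#SBATCH --account=MY-PROJECT       # placeholder TACC allocation/project name
#SBATCH --output=ls6-test.%j.out   # %j expands to the job ID

# Launch a placeholder executable across the requested tasks
srun ./my_app
</pre>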
+ | |||
+ | |||
+ | |||
+ | '''Note:''' Effective on September 27, 2016, all users must authenticate using Multi-Factor Authentication (MFA) in order to access TACC resources. More information on our [[TACC:MFA | TACC MFA page]]. | ||
+ | |||
+ | |||
+ | [[ Category:HPRC ]] |