=== Terra: A Lenovo x86 HPC Cluster ===
----
[[Image:Terra-racks.jpg|right|400px]]
{| class="wikitable" style="text-align: center;"
| System Name:
| Terra
|-
| Host Name:
| terra.tamu.edu
|-
| Operating System:
| Linux (CentOS 7)
|-
| Total Compute Cores/Nodes:
| 8,512 cores<br>304 nodes
|-
| Compute Nodes:
| 256 compute nodes, each with 64GB RAM <br> 48 GPU nodes, each with one dual-GPU Tesla K80 accelerator and 128GB of RAM
|-
| Interconnect:
| Intel Omni-Path 100 Series switches
|-
| Peak Performance:
| 326 TFLOPS
|-
| Global Disk:
| 2PB (raw) via IBM/Lenovo's GSS26 appliances for general use <br>1PB (raw) via Lenovo's GSS24 appliance purchased by and dedicated for GEOSAT
|-
| File System:
| General Parallel File System (GPFS)
|-
| Batch Facility:
| [http://slurm.schedmd.com/ Slurm by SchedMD]
|-
| Location:
| Teague Data Center
|-
| Production Date:
| Spring 2017
|}

Terra is an 8,512-core Lenovo commodity cluster. Each compute node has two [https://ark.intel.com/products/91754/Intel-Xeon-Processor-E5-2680-v4-35M-Cache-2_40-GHz Intel 64-bit 14-core Broadwell processors] (304 nodes × 28 cores per node = 8,512 cores). In addition to the 304 compute nodes, there are 3 login nodes (one with a GPU), each with 128 GB of memory. See the [[ Terra:Intro | Terra Intro Page]] for more information.

For a quick introduction to Terra and Slurm, see the [[:Terra:QuickStart | Terra Quick Start Guide]].

For details on using this system, see the [[Terra | User Guide for Terra]].
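Because Terra's batch facility is Slurm, work is submitted through a batch script. Below is a minimal sketch of such a script; the core count matches one 28-core Terra compute node, but the module name and the executable are placeholder examples, so consult the [[Terra | User Guide for Terra]] for the options actually supported on Terra.
<pre>
#!/bin/bash
#SBATCH --job-name=sample_job       # a name for the job
#SBATCH --nodes=1                   # run on a single node
#SBATCH --ntasks=28                 # one task per core on a 28-core Terra node
#SBATCH --time=01:00:00             # wall-clock limit (HH:MM:SS)
#SBATCH --mem=56G                   # memory per node; 64GB nodes, leave headroom for the OS
#SBATCH --output=sample_job.%j.out  # stdout/stderr file, %j expands to the job ID

# Load a software environment (module name is only an example)
module load intel

# Launch the program (placeholder executable)
./my_program
</pre>
The script would be submitted with <code>sbatch sample_job.slurm</code> and monitored with <code>squeue -u $USER</code>.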

=== Ada: An IBM/Lenovo x86 HPC Cluster ===
----
[[Image:Ada_racks.jpg|right|400px|caption]]
{| class="wikitable" style="text-align: center;"
| System Name:
| Ada
|-
| Host Name:
| ada.tamu.edu
|-
| Operating System:
| Linux (CentOS 6)
|-
| Total Compute Cores/Nodes:
| 17,340 cores<br>852 nodes
|-
| Compute Nodes:
| 792 compute nodes, each with 64GB RAM <br> 30 GPU nodes, each with dual GPUs and 64GB or 256GB RAM <br> 9 Phi nodes, each with dual Phi accelerators and 64GB RAM <br> 6 large memory compute nodes, each with 256GB RAM <br> 11 xlarge memory compute nodes, each with 1TB RAM <br> 4 xlarge memory compute nodes, each with 2TB RAM
|-
| Interconnect:
| FDR-10 Infiniband based on the <br>Mellanox SX6536 (core) and SX6036 (leaf) switches
|-
| Peak Performance:
| ~337 TFLOPS
|-
| Global Disk:
| 4PB (raw) via IBM/Lenovo's GSS26 appliance
|-
| File System:
| General Parallel File System (GPFS)
|-
| Batch Facility:
| Platform LSF
|-
| Location:
| Teague Data Center
|-
| Production Date:
| September 2014
|}

Ada is a 17,340-core IBM/Lenovo commodity cluster. Most of the compute nodes have two [https://ark.intel.com/products/75275/Intel-Xeon-Processor-E5-2670-v2-25M-Cache-2_50-GHz Intel 64-bit 10-core Ivy Bridge processors]. In addition to the 852 compute nodes, there are 8 login nodes, each with 256 GB of memory; some of the login nodes also have GPUs or Phi coprocessors. See the [[ Ada:Intro | Ada Intro Page]] for more information.

For details on using this system, see the [[Ada | User Guide for Ada]].
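Ada's batch facility is Platform LSF rather than Slurm, so job scripts use <code>#BSUB</code> directives instead of <code>#SBATCH</code>. The sketch below requests one 20-core Ada compute node; the module name and the executable are placeholder examples, so consult the [[Ada | User Guide for Ada]] for the queues and options actually configured on Ada.
<pre>
#!/bin/bash
#BSUB -J sample_job           # a name for the job
#BSUB -n 20                   # 20 cores, i.e. one full Ada compute node
#BSUB -R "span[ptile=20]"     # keep all 20 cores on the same node
#BSUB -W 1:00                 # wall-clock limit (HH:MM)
#BSUB -o sample_job.%J.out    # stdout/stderr file, %J expands to the job ID

# Load a software environment (module name is only an example)
module load intel

# Launch the program (placeholder executable)
./my_program
</pre>
The script would be submitted with <code>bsub < sample_job.lsf</code> (the redirection is required so LSF reads the <code>#BSUB</code> lines) and monitored with <code>bjobs</code>.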

=== Lonestar5: A Cray x86 HPC Cluster ===
----
[[Image:Lonestar5.jpg|right|400px|caption]]
{| class="wikitable" style="text-align: center;"
| System Name:
| Lonestar 5
|-
| Host Names:
| ls5.tacc.utexas.edu
|-
| Operating System:
| Linux
|-
| Number of Nodes:
| 1252 compute nodes, each with 24 cores and 64 GB of memory<br>8 large memory compute nodes, each with 32 cores and 512 GB of memory<br>2 large memory compute nodes, each with 20 cores and 1 TB of memory
|-
| Memory Sizes:
| 1252 nodes with 64GB <br>8 nodes with 512GB <br>2 nodes with 1TB
|-
| Interconnect Type:
| Intel [http://www.theregister.co.uk/2012/04/25/intel_cray_interconnect_followup/ (formerly Cray's)] Aries interconnect, Dragonfly topology
|-
| Peak Performance:
| 1.25+ PFLOPS
|-
| Total Disk:
| 5.4PB raw DDN ([http://www.ddn.com/ DataDirect Networks]) RAID storage <br> (More details to come.)
|-
| File System:
| [https://en.wikipedia.org/wiki/Lustre_%28file_system%29 Lustre]
|-
| Batch Facility:
| [http://slurm.schedmd.com/ Slurm Workload Manager]
|-
| Location:
| [http://www.tacc.utexas.edu/ TACC]
|-
| Production Date:
| March 2016
|}

LoneStar5 is the latest in a series of [https://www.tacc.utexas.edu/systems/lonestar LoneStar clusters hosted at TACC]. Jointly funded by the University of Texas System, Texas A&M University and Texas Tech University, it provides additional resources to TAMU researchers.

* Sources:
** http://www.hpcwire.com/2015/07/20/cray-comes-back-to-tacc/
** http://utsystem.edu/offices/health-affairs/utrc/news/2015/07/13/tacc-continues-legacy-lonestar-supercomputers
** https://portal.tacc.utexas.edu/user-guides/lonestar5/transition


'''Note:''' Effective September 27, 2016, all users must authenticate using Multi-Factor Authentication (MFA) in order to access TACC resources. More information is available on our [[TACC:MFA | TACC MFA page]].

=== HPRC Lab: Cluster Access Workstations ===
----
[[Image:SCC-3-web.jpg|right|350px|caption]]
{| class="wikitable" style="text-align: center;"
| System Name:
| HPRC Lab Workstations
|-
| Host Names:
| hprclab[0-7]@tamu.edu
|-
| Operating System:
| Fedora 23 x86_64
|-
| Processor:
| Intel Core i7-4770S @ 3.10GHz (Blocker OAL) <br>Intel Core i7-4790S @ 3.20GHz (SCC OAL)
|-
| Memory:
| 8GB RAM
|-
| Location:
| [http://oal.tamu.edu/Lab-Locations#0-StudentComputingCenter(SCC) SCC OAL] <br>[http://oal.tamu.edu/Lab-Locations#0-Blocker Blocker OAL]
|}

The HPRC Lab workstations serve as a point of access to our clusters and to some software: users can log in to the supercomputers from these machines, and a limited suite of software is installed on the workstations for local use.
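For example, reaching one of the clusters from a lab workstation only requires the standard OpenSSH client that ships with Fedora (replace <code>netid</code> with your own NetID; the file path in the copy command is just an illustration):
<pre>
# Open an interactive session on Terra (use ada.tamu.edu for Ada)
ssh netid@terra.tamu.edu

# Copy a results file from the cluster back to the local workstation
scp netid@terra.tamu.edu:myjob/results.tar.gz .
</pre>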

For details on using these workstations, see the [[HPRCLab | User Guide for HPRC Lab]].

'''Note:''' Access to the HPRC Lab workstations is available to all of our active users, but you must first email us to request access.

[[ Category:HPRC ]]