HPRC:Systems

Ada: An IBM (mostly) NeXtScale Cluster


System Name: Ada
Host Name: ada.tamu.edu
Operating System: Linux (CentOS 6.7)
Nodes/cores per node: 845/20-core @ 2.5 GHz IvyBridge;
15/40-core @ 2.26 GHz Westmere
Nodes with GPUs: 30 (2 Nvidia K20 GPUs/node)
Nodes with Phis: 9 (2 Phi coprocessors/node)
Memory size: 811 nodes with 64 GB (DDR3 1866 MHz);
34 nodes with 256 GB (DDR3 1866 MHz)
Extra-fat nodes: 11 nodes with 1TB (DDR3 1066 MHz);
4 nodes with 2TB (DDR3 1066 MHz)
Interconnect: FDR-10 Infiniband based on the
Mellanox SX6536 (core) and SX6036 (leaf) switches.
Peak Performance: ~337 TFLOPs
Global Disk: 4 PB (raw) via IBM's GSS26 appliance
File System: General Parallel File System (GPFS)
Batch Facility: Platform LSF
Location: Teague Data Center
Production Date: September 2014

Ada is a 17,500-core IBM commodity cluster with nodes based mostly on Intel's 64-bit 10-core IvyBridge processors. Twenty of the GPU nodes have 256 GB of memory. Included in the 845 nodes are 8 login nodes with 256 GB of memory each, 3 of them with 2 GPUs and 3 with 2 Phi coprocessors.

For details on using this system, see the User Guide for Ada.
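
The headline numbers for Ada can be reproduced from the table entries above. The short Python sketch below is only a back-of-the-envelope check; the per-core FLOPs/cycle figure it assumes for IvyBridge is an assumption, not something stated on this page.

```python
# Back-of-the-envelope check of Ada's core count and peak performance,
# using only the node counts from the table above. The figure of
# 8 double-precision FLOPs/cycle/core for IvyBridge (AVX: 4-wide add
# plus 4-wide multiply per cycle) is an assumption.

ivy_nodes, ivy_cores_per_node, ivy_ghz = 845, 20, 2.5
wsm_nodes, wsm_cores_per_node = 15, 40
dp_flops_per_cycle_ivy = 8  # assumed per-core AVX throughput

ivy_cores = ivy_nodes * ivy_cores_per_node        # 16,900
wsm_cores = wsm_nodes * wsm_cores_per_node        # 600
print(f"total cores: {ivy_cores + wsm_cores:,}")  # 17,500

ivy_peak_tflops = ivy_cores * ivy_ghz * dp_flops_per_cycle_ivy / 1000
print(f"IvyBridge partition peak: ~{ivy_peak_tflops:.0f} TFLOPS")  # ~338, close to the quoted ~337
```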

Eos: An IBM iDataPlex Cluster


System Name: Eos
Host Name: eos.tamu.edu
Operating System: Linux (RedHat Enterprise Linux and CentOS)
Nodes/Cores per node: 324/8-core @ 2.8GHz Nehalem
48/12-core @ 2.8GHz Westmere
Nodes with Fermi GPUs: 2 with 2 M2050 each
2 with 1 M2070 each
Memory Sizes: 366 nodes with 24GB (DDR3 1333MHz)
6 nodes with 48GB (DDR3 1066MHz)
Interconnect: 4x QDR Infiniband
(Voltaire Grid Director GD4700 switch)
Peak Performance: 35.5 TFlops
Total Disk: ~500 TB by a DDN S2A9900 RAID Array
File System: General Parallel File System (GPFS)
Batch Facility: PBS/Torque/Maui
Location: Teague Data Center
Production Date: May 2010

Eos is a 3,168-core IBM "iDataPlex" commodity cluster with nodes based on Intel's 64-bit Nehalem and Westmere processors. The cluster is composed of 6 head nodes, 4 storage nodes, and 362 compute nodes. The storage and compute nodes have 24 GB of DDR3 1333 MHz memory, while the head nodes have 48 GB of DDR3 1066 MHz memory. A Voltaire Grid Director 4700 QDR IB switch provides the core switching infrastructure.

For details on using this system, see the User Guide for Eos; detailed technical information is also available.
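
A similar sanity check works for Eos; here the assumed per-core throughput is the 4 double-precision FLOPs/cycle of the SSE-era Nehalem/Westmere parts, which is again an assumption rather than a figure from this page.

```python
# Sanity check of Eos's core count and peak performance from the table.
# The 4 DP FLOPs/cycle/core figure for Nehalem/Westmere (SSE: 2-wide
# add plus 2-wide multiply per cycle) is an assumption.

nehalem_cores = 324 * 8    # 2,592
westmere_cores = 48 * 12   # 576
total_cores = nehalem_cores + westmere_cores
print(f"total cores: {total_cores:,}")               # 3,168

peak_tflops = total_cores * 2.8 * 4 / 1000           # cores * GHz * FLOPs/cycle
print(f"estimated peak: ~{peak_tflops:.1f} TFLOPS")  # ~35.5, matching the table
```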

Crick: An IBM Power7+ BigData Analytics Cluster


Crick/Curie (under construction)
System Name: Crick
Host Name: crick.tamu.edu
Operating System: PowerLinux 6.5
Nodes/Cores per node: 25/16-core @ 4.2GHz Power7+
Memory Sizes: 23 nodes with 256GB (DDR3 1066 MHz)
2 nodes with 128GB (DDR3 1066 MHz)
Interconnect Type: 10Gbps Ethernet
Peak Performance: ~13 TFlops
Total Disk: ~377 TB
File System: GPFS's File Placement Optimizer (FPO) - IBM's HDFS alternative
Location: Wehner Data Center
Production Date: August 2015

Crick is a 400-core IBM Power7+ BigData cluster with nodes based on IBM's 64-bit 16-core Power7+ processors. Included in the 25 nodes are 2 management nodes with 128 GB of memory per node, 1 BigSQL node with 256 GB of memory and 14 TB (raw) of storage, and 22 data nodes with 14 TB (raw) of storage each for GPFS-FPO and local caching. Crick is used primarily for big data analytics.

For details on using this system, see the User Guide for Crick.
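
For the Power7+ systems, the quoted peak lines up with a per-core assumption of 8 double-precision FLOPs per cycle; that per-core value is an assumption rather than something stated here.

```python
# Rough check of Crick's core count and peak performance.
# The 8 DP FLOPs/cycle/core value for Power7+ is an assumption.

nodes, cores_per_node, ghz, flops_per_cycle = 25, 16, 4.2, 8
cores = nodes * cores_per_node                      # 400
peak_tflops = cores * ghz * flops_per_cycle / 1000
print(f"{cores} cores, ~{peak_tflops:.1f} TFLOPS")  # ~13.4, consistent with the quoted ~13
```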

Curie: An IBM Power7+ Cluster


Crick/Curie (under construction)
System Name: Curie
Host Name: curie.tamu.edu
Operating System: Linux (RedHat Enterprise Linux 6.6)
Nodes/Cores per node: 50/16-core @ 4.2GHz Power7+
Memory Size: 50 Nodes with 256 GB/Node (DDR3 1066 MHz)
Interconnect Type: 10Gbps Ethernet
Peak Performance: ~26 TFlops
Total Disk: 4PB (raw) via IBM's GSS26 appliance (shared with Ada)
File System: General Parallel File System (GPFS) (shared with Ada)
Batch Facility: Platform LSF
Location: Wehner Data Center
Production Date: May 2015

Curie is an 800-core IBM Power7+ cluster with nodes based on IBM's 64-bit 16-core Power7+ processors. Included in the 50 nodes are 2 login nodes with 256 GB of memory per node. Curie's file system and batch scheduler are shared with the Ada cluster.

For details on using this system, see the User Guide for Curie.
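
The same assumed per-core throughput as in the Crick sketch above (8 DP FLOPs/cycle per Power7+ core) reproduces Curie's quoted peak.

```python
# Curie: 50 nodes x 16 Power7+ cores @ 4.2GHz, same assumed
# 8 DP FLOPs/cycle/core as in the Crick sketch.

cores = 50 * 16                                     # 800
peak_tflops = cores * 4.2 * 8 / 1000
print(f"{cores} cores, ~{peak_tflops:.1f} TFLOPS")  # ~26.9, in line with the quoted ~26
```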

Neumann: An IBM BlueGene/Q (BG/Q) Cluster


Neumann
System Name: Neumann
Host Name: neumann.tamu.edu
Operating System: Redhat Enterprise Linux 6.5 (login nodes)
CNK (IBM's BG Compute Node Kernel)
INK (IBM's BG I/O Node Kernel).
Number of Nodes: 2048 IBM BG/Q nodes
Number of Processing Cores: 32,736 PowerPC A2 (all@1.6GHz)
Total Memory: 32TB (2048 x 16GB)
Interconnect Type: Infiniband
Peak Performance: ~400 TFlops
Total Disk: ~2PB
File System: General Parallel File System (GPFS)
Batch Facility: Load Leveler
Location: IBM Campus, Rochester, Minnesota
Production Date: estimated August 2015

For details on using this system, see the User Guide for Neumann.

Lonestar


Lonestar4: A Dell Linux Cluster

System Name: Lonestar 4
Host Names: lonestar.tacc.utexas.edu
lslogin1.tacc.utexas.edu
lslogin2.tacc.utexas.edu
Operating System: Linux
Number of Nodes: 1,888
Number of Processing Cores: 22,656
Interconnect Type: Quad Data Rate (QDR) InfiniBand
Total Memory: 44 TB
Peak Performance: 302 TFlops
Total Disk: 276 TB (local), 1000 TB (global)
File System: Lustre
Batch Facility: Load Leveler
Location: TACC
Production Date: 2011

The TACC Dell Linux Cluster contains 22,656 cores within 1,888 Dell PowerEdge M610 compute blades (nodes), 16 PowerEdge R610 compute-I/O server nodes, and 2 PowerEdge M610 (3.3GHz) login nodes. Each compute node has 24GB of memory, and the login/development nodes have 16GB. The system storage includes a 1000TB parallel (SCRATCH) Lustre file system and 276TB of local compute-node disk space (146GB/node). A QDR InfiniBand switch fabric interconnects the nodes (I/O and compute) through a fat-tree topology, with a point-to-point bandwidth of 40Gbit/sec (unidirectional speed).

Compute nodes have two processors, each a Xeon 5680 series 3.33GHz hex-core processor with a 12MB unified L3 cache. Peak performance for the 12 cores is 160 GFLOPS. The Westmere microprocessor (similar to the Nehalem processor family, but built on 32nm technology) has the following features: hex-core, shared L3 cache per socket, integrated memory controller, larger L1 caches, macro-ops fusion, double-speed integer units, Advanced Smart Cache, and new SSE4.2 instructions. The memory system has 3 channels and uses 1333 MHz DIMMs.

Technical information is from TACC HPC Resources. For more information about using Lonestar, see the TACC Dell Linux Cluster User Guide.
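
The per-node and system-wide figures quoted above are mutually consistent; the sketch below reproduces them, assuming 4 double-precision FLOPs/cycle per Westmere core (the only number not taken from the paragraph).

```python
# Reproducing Lonestar 4's aggregate figures from the per-node values
# in the paragraph above. The 4 DP FLOPs/cycle/core value for the
# Westmere Xeon 5680 is an assumption.

nodes = 1888
cores_per_node, ghz, flops_per_cycle = 12, 3.33, 4

node_peak_gflops = cores_per_node * ghz * flops_per_cycle
print(f"per-node peak: ~{node_peak_gflops:.0f} GFLOPS")               # ~160, as stated

print(f"total cores: {nodes * cores_per_node:,}")                     # 22,656
print(f"system peak: ~{nodes * node_peak_gflops / 1000:.0f} TFLOPS")  # ~302
print(f"local disk: ~{nodes * 146 / 1000:.0f} TB")                    # ~276 TB at 146GB/node
```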

Lonestar5: A Cray XC40 Linux Cluster

System Name: Lonestar 5
Host Names: ls5.tacc.utexas.edu
Operating System: Linux
Number of Nodes: 1252 dual-socket E5-2690 v3 (12-core @ 2.6GHz)
2 1TB large memory nodes
8 512GB large memory nodes
(More details to come.)
Memory Sizes: 1252 nodes with 64GB
8 nodes with 512GB
2 nodes with 1TB
Interconnect Type: Intel (formerly Cray's) Aries interconnect, Dragonfly topology
Peak Performance: 1.25+ PFlops
Total Disk: 5.4 PB raw DDN (DataDirect Networks) RAID storage
(More details to come.)
File System: Lustre
Batch Facility: Slurm Workload Manager
Location: TACC
Production Date: March 2016

Lonestar 5 is the latest in the series of Lonestar clusters hosted at TACC. Jointly funded by the University of Texas System, Texas A&M University, and Texas Tech University, it provides additional resources to TAMU researchers. At present, it is scheduled for production in early 2016. (More details to come.)
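
The 1.25+ PFlops figure is consistent with the node specification above under one assumption: 16 double-precision FLOPs per cycle per Haswell (E5-2690 v3) core. The handful of large-memory nodes is ignored in this rough estimate.

```python
# Rough check of Lonestar 5's quoted peak from its node specification.
# The 16 DP FLOPs/cycle/core value for the Haswell E5-2690 v3 is an
# assumption; the large-memory nodes are left out.

nodes, sockets, cores_per_socket, ghz, flops_per_cycle = 1252, 2, 12, 2.6, 16
cores = nodes * sockets * cores_per_socket            # 30,048
peak_pflops = cores * ghz * flops_per_cycle / 1e6
print(f"{cores:,} cores, ~{peak_pflops:.2f} PFLOPS")  # ~1.25
```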