HPRC:Systems
Ada: An IBM (mostly) NeXtScale Cluster
System Name: Ada
Host Name: ada.tamu.edu
Operating System: Linux (CentOS 6.7)
Nodes/cores per node: 845 nodes with 20 cores each @ 2.5 GHz (IvyBridge); 15 nodes with 40 cores each @ 2.26 GHz (Westmere)
Nodes with GPUs: 30 (2 NVIDIA K20 GPUs per node)
Nodes with Phis: 9 (2 Phi coprocessors per node)
Memory size: 811 nodes with 64 GB (DDR3 1866 MHz); 34 nodes with 256 GB (DDR3 1866 MHz)
Extra-fat nodes: 11 nodes with 1 TB (DDR3 1066 MHz); 4 nodes with 2 TB (DDR3 1066 MHz)
Interconnect: FDR-10 InfiniBand based on the Mellanox SX6536 (core) and SX6036 (leaf) switches
Peak Performance: ~337 TFLOPS
Global Disk: 4 PB (raw) via IBM's GSS26 appliance
File System: General Parallel File System (GPFS)
Batch Facility: Platform LSF
Location: Teague Data Center
Production Date: September 2014
Ada is a 17,500-core IBM commodity cluster with nodes based mostly on Intel's 64-bit, 10-core IvyBridge processors. Twenty of the GPU nodes have 256 GB of memory. Included in the 845 nodes are 8 login nodes with 256 GB of memory each; 3 of these have 2 GPUs and 3 have 2 Phi coprocessors.
For details on using this system, see the User Guide for Ada.
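Because Ada's batch facility is Platform LSF, jobs are described with #BSUB directives and submitted with bsub. The sketch below is illustrative only; the walltime, core counts, and file names are placeholder assumptions, not values documented on this page, so consult the Ada User Guide for site-specific settings.

  #!/bin/bash
  ## Minimal LSF job-script sketch for a single 20-core node on Ada.
  ## All limits and names below are placeholders, not documented values.
  #BSUB -J example_job          # job name
  #BSUB -n 20                   # total number of cores
  #BSUB -R "span[ptile=20]"     # place all 20 cores on one node
  #BSUB -W 1:00                 # walltime limit (hh:mm)
  #BSUB -o example_job.%J.out   # stdout file (%J expands to the job ID)
  #BSUB -e example_job.%J.err   # stderr file

  ./my_program                  # replace with the actual executable

  ## Submit with:  bsub < example_job.lsf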
Eos: An IBM iDataplex Cluster
System Name: Eos
Host Name: eos.tamu.edu
Operating System: Linux (Red Hat Enterprise Linux and CentOS)
Nodes/cores per node: 324 nodes with 8 cores each (Nehalem); 48 nodes with 12 cores each (Westmere)
Nodes with Fermi GPUs: 2 nodes with 2 M2050 GPUs each; 2 nodes with 1 M2070 GPU each
Number of Processing Cores: 3,168 cores @ 2.8 GHz
Interconnect Type: 4x QDR InfiniBand (Voltaire Grid Director GD4700 switch)
Total Memory: 9,056 GB
Peak Performance: 35.5 TFLOPS
Total Disk: ~500 TB via a DDN S2A9900 RAID array
File System: General Parallel File System (GPFS)
Batch Facility: PBS/Torque/Maui
Location: Teague Data Center
Production Date: May 2010
Eos is an IBM iDataPlex commodity cluster with nodes based on Intel's 64-bit Nehalem and Westmere processors. The cluster is composed of 6 head nodes, 4 storage nodes, and 362 compute nodes. The storage and compute nodes have 24 GB of DDR3 1333 MHz memory, while the head nodes have 48 GB of DDR3 1066 MHz memory. A Voltaire Grid Director 4700 QDR InfiniBand switch provides the core switching infrastructure.
For details on using this system, see the User Guide for Eos.
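Eos schedules work through PBS/Torque with Maui, so jobs are described with #PBS directives and submitted with qsub. The following is a minimal sketch only, assuming a run on one 8-core Nehalem node; the resource limits are placeholders and should be replaced with values from the Eos User Guide.

  #!/bin/bash
  ## Minimal PBS/Torque job-script sketch for one 8-core node on Eos.
  ## Resource limits below are placeholders, not documented values.
  #PBS -N example_job              # job name
  #PBS -l nodes=1:ppn=8            # 1 node, 8 processors per node
  #PBS -l walltime=01:00:00        # walltime limit (hh:mm:ss)
  #PBS -j oe                       # merge stderr into stdout

  cd $PBS_O_WORKDIR                # run from the submission directory
  ./my_program                     # replace with the actual executable

  ## Submit with:  qsub example_job.pbs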
Crick: An IBM POWER7 BigInsights (Hadoop) Cluster
System Name: Crick
Host Name: crick.tamu.edu
Operating System: PowerLinux 6.5
Number of Nodes: 23 IBM PowerLinux 7R2 servers
Number of Processing Cores: 368 (all @ 4.2 GHz)
Interconnect Type: 10 Gb Ethernet
Total Memory: 5.75 TB (23 x 256 GB)
Peak Performance: ??? TFLOPS (23 nodes x 16 cores x 4.2 Gcycles/sec x X FLOPs/cycle)
Total Disk: 377 TB (23 nodes x 28 disks/node x 600 GB/disk)
File System: GPFS's File Placement Optimizer (FPO), IBM's HDFS alternative
Location: Wehner Data Center
Production Date: estimated Q4 2014
Curie: An IBM POWER7 HPC Cluster
System Name: Curie
Host Name: curie.tamu.edu
Operating System: PowerLinux 6.5
Number of Nodes: 48 IBM PowerLinux 7R2 servers
Number of Processing Cores: 768 (all @ 4.2 GHz)
Interconnect Type: 10 Gb Ethernet
Total Memory: 12 TB (48 x 256 GB)
Peak Performance: ??? TFLOPS (48 nodes x 16 cores x 4.2 Gcycles/sec x X FLOPs/cycle)
Total Disk: see Ada above
File System: General Parallel File System (GPFS)
Batch Facility: Platform LSF
Location: Wehner Data Center
Production Date: estimated Q4 2014
Neumann: An IBM BlueGene/Q (BG/Q) Cluster
System Name: Neumann
Host Name: neumann.tamu.edu
Operating System: Red Hat Enterprise Linux 6.5 (login nodes); CNK (IBM's BG Compute Node Kernel); INK (IBM's BG I/O Node Kernel)
Number of Nodes: 2,048 IBM BG/Q nodes
Number of Processing Cores: 32,736 PowerPC A2 (all @ 1.6 GHz)
Interconnect Type: InfiniBand
Total Memory: 32 TB (2,048 x 16 GB)
Peak Performance: ??? TFLOPS (2,048 nodes x 16 cores x 1.6 Gcycles/sec x X FLOPs/cycle)
Total Disk: ~2 PB
File System: General Parallel File System (GPFS)
Batch Facility: LoadLeveler
Location: IBM Campus, Rochester, Minnesota
Production Date: estimated August 2015
Lonestar
Lonestar4: A Dell Linux Cluster
System Name: Lonestar 4
Host Names: lonestar.tacc.utexas.edu, lslogin1.tacc.utexas.edu, lslogin2.tacc.utexas.edu
Operating System: Linux
Number of Nodes: 1,888
Number of Processing Cores: 22,656
Interconnect Type: Quad Data Rate (QDR) InfiniBand
Total Memory: 44 TB
Peak Performance: 302 TFLOPS
Total Disk: 276 TB (local), 1,000 TB (global)
File System: Lustre
Batch Facility: Load Leveler
Location: TACC
Production Date: 2011
The TACC Dell Linux Cluster contains 22,656 cores within 1,888 Dell PowerEdge M610 compute blades (nodes), 16 PowerEdge R610 compute-I/O server nodes, and 2 PowerEdge M610 (3.3 GHz) login nodes. Each compute node has 24 GB of memory, and the login/development nodes have 16 GB. The system storage includes a 1,000 TB parallel (SCRATCH) Lustre file system and 276 TB of local compute-node disk space (146 GB/node). A QDR InfiniBand switch fabric interconnects the nodes (I/O and compute) through a fat-tree topology, with a point-to-point bandwidth of 40 Gbit/sec (unidirectional).
Each compute node has two processors, each a hex-core 3.33 GHz Xeon 5680-series processor with a 12 MB unified L3 cache; peak performance for the 12 cores is 160 GFLOPS. The Westmere microprocessor (similar to the Nehalem processor family, but built on 32 nm technology) has the following features: hex-core, shared L3 cache per socket, integrated memory controller, larger L1 caches, macro-ops fusion, double-speed integer units, Advanced Smart Cache, and new SSE4.2 instructions. The memory system has 3 channels and uses 1333 MHz DIMMs.
Technical information is from TACC HPC Resources. For more information about using Lonestar, see the TACC Dell Linux Cluster User Guide.
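As a quick consistency check on the quoted peak figures (not a statement from TACC): a Westmere core can complete 4 double-precision FLOPs per cycle via SSE, so

  12 cores x 3.33 GHz x 4 FLOPs/cycle ≈ 160 GFLOPS per node
  1,888 nodes x 160 GFLOPS ≈ 302 TFLOPS system peak

which matches the per-node and system peak numbers given above.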
Lonestar5: A Cray XC40 Linux Cluster
System Name: Lonestar 5
Host Names: TBA
Operating System: Linux
Number of Nodes: 1,252 dual-socket compute nodes (two 12-core 2.6 GHz Xeon E5-2690 v3 per node) with 64 GB each; 2 large-memory nodes with 1 TB; 8 large-memory nodes with 512 GB (more details to come)
Number of Processing Cores: 30,000+
Interconnect Type: Intel (formerly Cray's) Aries interconnect, Dragonfly topology
Total Memory: 86.25 TB (88,320 GB = 1,252 x 64 GB + 2 x 2 TB + 8 x 512 GB)
Peak Performance: 1.25+ PFLOPS
Total Disk: 5.4 PB (???) raw DDN (DataDirect Networks) RAID storage
File System: Lustre
Batch Facility: Slurm Workload Manager
Location: TACC
Production Date: 2016 (???)
Lonestar 5 is the latest in the series of Lonestar clusters hosted at TACC. Jointly funded by the University of Texas System, Texas A&M University, and Texas Tech University, it provides additional resources to TAMU researchers. At present, it is scheduled for production in early 2016.
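Because Lonestar 5 uses the Slurm Workload Manager rather than LSF or PBS, jobs are described with #SBATCH directives and submitted with sbatch. The sketch below assumes one 24-core compute node; the partition name and time limit are placeholders, not values documented on this page, so check the Lonestar 5 documentation once it is published.

  #!/bin/bash
  ## Minimal Slurm job-script sketch for one 24-core node on Lonestar 5.
  ## Partition name and time limit are placeholders, not documented values.
  #SBATCH -J example_job            # job name
  #SBATCH -N 1                      # number of nodes
  #SBATCH -n 24                     # total number of tasks (cores)
  #SBATCH -t 01:00:00               # walltime limit (hh:mm:ss)
  #SBATCH -p normal                 # partition (placeholder name)
  #SBATCH -o example_job.%j.out     # stdout file (%j expands to the job ID)

  ./my_program                      # replace with the actual executable

  ## Submit with:  sbatch example_job.slurm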