
HPRC:Systems


HPC Systems

IBM iDataPlex Cluster | Dell Linux Cluster (TACC)

eos: an IBM iDataPlex Cluster

System Name: Eos
Host Name: eos.tamu.edu
Operating System: Linux (Red Hat Enterprise Linux and CentOS)
Number of Nodes: 372 (324 8-way Nehalem-based and 48 12-way Westmere-based)
Number of Nodes with Fermi GPUs: 4 (2 with 2 M2050s each and 2 with 1 M2070 each)
Number of Processing Cores: 3,168 (all at 2.8 GHz)
Interconnect Type: 4x QDR InfiniBand (Voltaire Grid Director GD4700 switch)
Total Memory: 9,056 GB
Peak Performance: 35.5 TFlops
Total Disk: ~500 TB served by a DDN S2A9900 RAID array
File System: GPFS
Production Date: May 2010

Eos is an IBM iDataPlex commodity cluster with nodes based on Intel's 64-bit Nehalem and Westmere processors. The cluster is composed of 6 head nodes, 4 storage nodes, and 362 compute nodes. The storage and compute nodes have 24 GB of DDR3 1333 MHz memory each, while the head nodes have 48 GB of DDR3 1066 MHz memory. A Voltaire Grid Director 4700 QDR InfiniBand switch provides the core switching infrastructure. For details on using this system, see the User Guide for Eos.
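
The quoted peak of 35.5 TFLOPS follows directly from the core counts and clock rate listed above. The short Python sketch below is not part of the original guide; it simply reproduces the figure, assuming the usual 4 double-precision floating-point operations per core per cycle for SSE-era Nehalem/Westmere processors.

 # Theoretical peak for Eos (a sketch; 4 DP FLOPs/cycle/core assumed for SSE)
 nehalem_cores = 324 * 8                 # 8-way Nehalem-based nodes
 westmere_cores = 48 * 12                # 12-way Westmere-based nodes
 cores = nehalem_cores + westmere_cores  # 3,168 cores total
 clock_ghz = 2.8                         # all cores run at 2.8 GHz
 flops_per_cycle = 4                     # 2 adds + 2 multiplies per cycle (128-bit SSE)
 peak_tflops = cores * clock_ghz * flops_per_cycle / 1000
 print(f"Eos theoretical peak: {peak_tflops:.1f} TFLOPS")  # -> 35.5 TFLOPS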


lonestar: a Dell Linux Cluster

(located at TACC)

System Name: Lonestar 4
Host Names: lonestar.tacc.utexas.edu
lslogin1.tacc.utexas.edu
lslogin2.tacc.utexas.edu
Operating System: Linux
Number of Nodes: 1,888
Number of Processing Cores: 22,656
Interconnect Type: Quad Data Rate (QDR) InfiniBand
Total Memory: 44 TB
Peak Performance: 302 TFlops
Total Disk: 276 TB (local), 1000 TB (global)
File System: Lustre
Batch Facility: Load Leveler
Production Date: 2011

The TACC Dell Linux Cluster contains 22,656 cores within 1,888 Dell PowerEdge M610 compute blades (nodes), 16 PowerEdge R610 compute-I/O server nodes, and 2 PowerEdge M610 (3.3 GHz) login nodes. Each compute node has 24 GB of memory, and the login/development nodes have 16 GB. The system storage includes a 1,000 TB parallel (SCRATCH) Lustre file system and 276 TB of local compute-node disk space (146 GB/node). A QDR InfiniBand switch fabric interconnects the nodes (I/O and compute) through a fat-tree topology, with a point-to-point bandwidth of 40 Gb/sec (unidirectional speed).

Compute nodes have two processors, each a Xeon 5680 series 3.33 GHz hex-core processor with a 12 MB unified L3 cache. Peak performance for the 12 cores is 160 GFLOPS. The Westmere microprocessor (similar to the Nehalem processor family, but built on 32 nm technology) has the following features: hex-core design, a shared L3 cache per socket, an integrated memory controller, larger L1 caches, macro-ops fusion, double-speed integer units, Advanced Smart Cache, and the new SSE4.2 instructions. The memory system has 3 channels and uses 1333 MHz DIMMs.

Technical information is adapted from TACC HPC Resources. For more information about using Lonestar, see the TACC Dell Linux Cluster User Guide.
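
The per-node and system-wide peak figures above can be checked the same way as for Eos. The sketch below is not taken from TACC's documentation; it assumes 4 double-precision FLOPs per core per cycle on the Westmere Xeon 5680.

 # Theoretical peak for Lonestar 4 (a sketch; 4 DP FLOPs/cycle/core assumed)
 cores_per_node = 12             # two hex-core Xeon 5680 sockets per blade
 clock_ghz = 3.33
 flops_per_cycle = 4
 nodes = 1888
 node_peak_gflops = cores_per_node * clock_ghz * flops_per_cycle
 system_peak_tflops = node_peak_gflops * nodes / 1000
 print(f"Per-node peak: {node_peak_gflops:.0f} GFLOPS")    # -> 160 GFLOPS
 print(f"System peak:   {system_peak_tflops:.0f} TFLOPS")  # -> 302 TFLOPS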