
Toolchains

Overview

A toolchain on a TAMU HPRC cluster is a collection of tools used for building software. It typically includes:

  • a compiler collection providing basic language support (C/Fortran/C++)
  • an MPI implementation for multi-node communication
  • a set of numerical libraries (BLAS/LAPACK/FFTW) for accelerated math
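
On our clusters these components are packaged as modules, and each toolchain has a module of its own that pulls in the matching components. A minimal sketch, assuming an Lmod-style module environment and using the intel/2016b toolchain listed below (the exact component modules shown by module list may differ):

  module purge                # start from a clean environment
  module load intel/2016b     # loads the matching compiler, Intel MPI, and MKL modules
  module list                 # shows which component modules were pulled in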

Most modules on our clusters encode the toolchain used to build them in their version tag (e.g. Python/3.5.2-intel-2016b was built with the intel/2016b toolchain below).

Mixing components from different toolchains almost always leads to problems. For example, if you mix Intel MPI modules with OpenMPI modules, your program is all but guaranteed not to run (even if you manage to get it to compile). We recommend always using modules from the same (sub)toolchain. [Since late 2016 we've been looking at EasyBuild's --minimal-toolchain option to cut down on module count, so "GCCcore" is now a common new subtoolchain.]
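
A sketch of the recommended pattern: pick one toolchain, load it, and then load only application modules whose version tag carries the same toolchain suffix. The Python module name is the example from above; the mixed-toolchain lines are illustrative of what to avoid:

  # Consistent: everything below belongs to the intel/2016b toolchain
  module purge
  module load intel/2016b
  module load Python/3.5.2-intel-2016b

  # Inconsistent: do NOT load a *-foss-2016b module (OpenMPI-based) on top of
  # an intel-2016b (Intel MPI-based) environment, or vice versa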

Currently Supported

Column groups: Compiler(s) = binutils, GCC, Intel CC; MPI = Intel MPI, OpenMPI; Linear Algebra = Intel MKL, LAPACK, OpenBLAS, ScaLAPACK, FFTW; Available on = ada, curie, terra (an X marks the clusters where the toolchain is installed).

Toolchain | binutils      | GCC   | Intel CC   | Intel MPI  | OpenMPI | Intel MKL  | LAPACK | OpenBLAS | ScaLAPACK | FFTW  | ada | curie | terra

intel: Composed of the Intel Cluster Toolkit built upon a particular version of GCC. These are the recommended toolchains for HPRC Intel-based clusters.

2017A     | -             | 6.3.0 | 2017.1.132 | 2017.1.132 | -       | 2017.1.132 | -      | -        | -         | -     | X   | -     | X
2016D     | 2.27          | 6.2.0 | 2017.1.132 | 2017.1.132 | -       | 2017.1.132 | -      | -        | -         | -     | X   | -     | X
2016C     | 2.26          | 5.4.0 | 2017.1.132 | 2017.1.132 | -       | 2017.1.132 | -      | -        | -         | -     | X   | -     | X
2016b     | 2.26          | 5.4.0 | 2016.3.210 | 5.1.3.181  | -       | 11.3.3.210 | -      | -        | -         | -     | X   | -     | X
2016a     | 2.25          | 4.9.3 | 2016.1.150 | 5.1.2.150  | -       | 11.3.1.150 | -      | -        | -         | -     | X   | -     | X
2015F     | 2.25          | 4.9.4 | 2015.7.235 | 5.1.3.223  | -       | 11.3.4.258 | -      | -        | -         | -     | -   | -     | X
2015B     | 2.2x (system) | 4.8.4 | 2015.3.187 | 5.0.3.048  | -       | 11.2.3.187 | -      | -        | -         | -     | X   | -     | -

iomkl: A combination of the Intel compiler and Math Kernel Library components with OpenMPI.

2017A     | -             | 6.3.0 | 2017.1.132 | -          | 2.0.2   | 2017.1.132 | -      | -        | -         | -     | -   | -     | X
2016D     | 2.27          | 6.2.0 | 2017.1.132 | -          | 2.0.1   | 2017.1.132 | -      | -        | -         | -     | X   | -     | X
2016.07   | 2.26          | 5.4.0 | 2016.3.210 | -          | 1.10.3  | 11.3.3.210 | -      | -        | -         | -     | X   | -     | -
2015B     | 2.2x (system) | 4.8.4 | 2015.3.187 | -          | 1.8.8   | 11.2.3.187 | -      | -        | -         | -     | X   | -     | -

foss: Based upon "Free and Open-Source Software". These toolchains produce slower code but provide portability to other platforms (e.g. those that don't have an Intel compiler).

2017A     | -             | 6.3.0 | -          | -          | 2.0.2   | -          | 3.7.0  | 0.2.19   | 2.0.2     | 3.3.6 | X   | X     | X
2016D     | 2.27          | 6.2.0 | -          | -          | 2.0.1   | -          | 3.6.1  | 0.2.19   | 2.0.2     | 3.3.5 | X   | X     | X
2016b     | 2.26          | 5.4.0 | -          | -          | 1.10.3  | -          | 3.6.1  | 0.2.18   | 2.0.2     | 3.3.4 | X   | X     | X
2016a     | 2.25          | 4.9.3 | -          | -          | 1.10.2  | -          | 3.6.0  | 0.2.15   | 2.0.2     | 3.3.4 | -   | X     | -
2015b     | 2.25          | 4.9.3 | -          | -          | 1.8.8   | -          | 3.5.0  | 0.2.14   | 2.0.2     | 3.3.4 | -   | X     | -
2015a     | 2.2x (system) | 4.9.2 | -          | -          | 1.8.4   | -          | 3.5.0  | 0.2.13   | 2.0.2     | 3.3.4 | -   | X     | -
  1. For details on using Intel MKL's BLAS and ScaLAPACK (and at some point FFTW), see our current Intel MKL page.
  2. Note: OpenMPI 1.10.x is largely incompatible with both earlier and later versions and should probably be avoided.
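
As a rough illustration of how a toolchain is used once loaded, the MPI compiler wrappers take care of include and link paths. This is only a sketch: the source file name is hypothetical, and the -mkl, -lopenblas, and -lfftw3 flags assume the loaded modules have placed those libraries on the compiler's search paths.

  # intel toolchain: Intel compilers + Intel MPI + MKL
  module load intel/2016b
  mpiicc -O2 -mkl my_mpi_code.c -o my_mpi_code      # mpiicc wraps icc; -mkl links Intel MKL

  # foss toolchain: GCC + OpenMPI + OpenBLAS/FFTW
  module load foss/2016b
  mpicc -O2 my_mpi_code.c -o my_mpi_code -lopenblas -lfftw3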

Others

In the past, we have offered other combinations, including some that used the Portland Group compilers and some that used other MPI variants (e.g. MVAPICH, MPICH, MPICH2). If the need arises, we will consider building such toolchains again, but for now we recommend using one of the toolchains above (preferably the most recent).