
Toolchains

Overview

A toolchain on a TAMU HPRC cluster is a collection of tools used for building software. It typically includes:

  • a compiler collection providing basic language support (C/Fortran/C++)
  • an MPI implementation for multi-node communication
  • a set of linear algebra libraries (FFTW/BLAS/LAPACK) for accelerated math
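
In practice, a toolchain is just another module. A minimal sketch, assuming the Lmod module system used on our clusters (the version shown is one from the table below; check "module avail" for exact names):

  # loading a full toolchain module pulls in its component modules
  module load intel/2016b
  # "module list" should now show icc/ifort (compilers), impi (MPI)
  # and imkl (math libraries) among the loaded modules
  module list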

Most modules on our clusters include the toolchain used to build them in their version tag (e.g. Python/3.5.2-intel-2016b was built with the intel/2016b toolchain below).

Mixing components from different toolchains almost always leads to problems. For example, if you mix Intel MPI modules with OpenMPI modules, your program is virtually guaranteed not to run (even if you manage to get it to compile). We recommend you always use modules from the same (sub)toolchain. [Since late 2016 we've been looking at EasyBuild's --minimal-toolchain option to cut down on module count, so "GCCcore" is now a common new "subtoolchain".]
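
As a rule of thumb, the toolchain tag in a module's version should match the toolchain module you have loaded. A minimal sketch using the example module above (assuming Lmod; adapt to what "module spider" reports on your cluster):

  # consistent: the loaded toolchain and the module's tag both say intel/2016b
  module load intel/2016b
  module load Python/3.5.2-intel-2016b

  # risky: loading a module tagged -foss-2016b on top of intel/2016b would
  # mix Intel MPI with OpenMPI and break at run time, so don't do this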

Currently Supported

Each toolchain family below is listed with the versions of its component tools: binutils, GCC(core), the Intel Compilers (iccifort), Intel MPI (impi), OpenMPI, Intel MKL (imkl), and the open-source math libraries (LAPACK, OpenBLAS, ScaLAPACK, FFTW). Only the components a family actually includes are shown. An "X" under a cluster name (ada, curie, terra) means that toolchain version is available on that cluster; "-" means it is not.

intel: Composed of the Intel Cluster Toolkit built upon a particular version of GCC. These are the recommended toolchains for HPRC Intel-based clusters.

  Version  binutils       GCC(core)  iccifort    impi        imkl        ada  curie  terra
  2017b    2.28           6.4.0      2017.4.196  2017.3.196  2017.3.196  X    -      X
  2017A    2.2x (system)  6.3.0      2017.1.132  2017.1.132  2017.1.132  X    -      X
  2016b    2.26           5.4.0      2016.3.210  5.1.3.181   11.3.3.210  X    -      X
  2016a    2.25           4.9.3      2016.1.150  5.1.2.150   11.3.1.150  X    -      X
  2015B    2.2x (system)  4.8.4      2015.3.187  5.0.3.048   11.2.3.187  X    -      -

iomkl: A combination of the Intel compiler and math kernel library components with OpenMPI.

  Version  binutils       GCC(core)  iccifort    OpenMPI  imkl        ada  curie  terra
  2017b    2.28           6.4.0      2017.4.196  2.1.1    2017.3.196  X    -      X
  2017A    2.2x (system)  6.3.0      2017.1.132  2.0.2    2017.1.132  X    -      X
  2016.07  2.26           5.4.0      2016.3.210  1.10.3   11.3.3.210  X    -      -
  2015B    2.2x (system)  4.8.4      2015.3.187  1.8.8    11.2.3.187  X    -      -

foss: Based upon "Free and Open-Source Software". These toolchains produce slightly slower code but do provide code portability to other platforms (e.g. those that don't have an Intel compiler).

  Version  binutils       GCC(core)  OpenMPI  LAPACK  OpenBLAS  ScaLAPACK  FFTW   ada  curie  terra
  2017b    2.28           6.4.0      2.1.1    3.7.0   0.2.20    2.0.2      3.3.6  X    X      X
  2017A    2.2x (system)  6.3.0      2.0.2    3.7.0   0.2.19    2.0.2      3.3.6  X    X      X
  2016b    2.26           5.4.0      1.10.3   3.6.1   0.2.18    2.0.2      3.3.4  X    X      X
  2016a    2.25           4.9.3      1.10.2   3.6.0   0.2.15    2.0.2      3.3.4  -    X      -
  2015b    2.25           4.9.3      1.8.8    3.5.0   0.2.14    2.0.2      3.3.4  -    X      -
  2015a    2.2x (system)  4.9.2      1.8.4    3.5.0   0.2.13    2.0.2      3.3.4  -    X      -
  1. For details on using Intel MKL's BLAS and ScaLAPACK (and at some point FFTW), see our current Intel MKL page.
  2. Note: OpenMPI 1.10.x is largely incompatible with all previous and later versions and should probably be avoided.
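
For reference, a build-and-run sketch with one of these toolchains (hello_mpi.c is a placeholder for your own MPI source; on our clusters the run step would normally go in a batch job):

  # open-source stack: GCC + OpenMPI + OpenBLAS/ScaLAPACK/FFTW
  module load foss/2017b
  mpicc -O2 -o hello_mpi hello_mpi.c     # OpenMPI compiler wrapper
  mpirun -np 4 ./hello_mpi

  # with the intel toolchains, use Intel MPI's wrappers instead:
  #   module load intel/2017b
  #   mpiicc -O2 -o hello_mpi hello_mpi.c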

Future (not yet supported)

2018a

binutils/2.28
CUDA/9.1.85
FFTW/3.3.7
GCC/6.4.0
icc/2018.1.163
ifort/2018.1.163
imkl/2018.1.163
impi/2018.1.163
OpenBLAS/0.2.20-GCC-6.4.0-2.28
OpenMPI/2.1.2-gcccuda-2018a
ScaLAPACK/2.0.2-gompic-2018a-OpenBLAS-0.2.20

2018b

Our recommended toolchains - 2017A

Last updated: May 25, 2018

A breakdown of our 2017A toolchains

For the past few years, TAMU HPRC has been using EasyBuild to deploy commonly used HPC libraries. In the past, we used what EasyBuild contributors (who have HPC systems of their own) deploy on their own systems as a guide for our software deployment. For 2017A, we've set off on a new path.

With the recent addition of the --minimal-toolchain option, which allows us to minimize the overall module count and share common modules among many toolchains, we've tried to trim things down. Additionally, to stay as closely aligned as possible with the Linux distribution in use, we've deferred to many of the distribution-provided build tools like autoconf/automake/binutils/cmake/make/libtool/pkgconfig/etc.

As a result, it isn't as easy to tell which 2017A modules work with which others.

Here is a breakdown, by version suffix, of what composes the 2017A toolchains (a sketch with example module names follows the list):

  • -GCCcore-6.3.0 - these were built with GCC 6.3.0 using the system binutils (which is a deviation from how EasyBuild does it, but we thought it best to try here).
  • -GCCcore-6.3.0-Python-2.7.12-bare - these were built against the most basic of Pythons (only what was required) and do not include a proper Python module. If you use these, you must load a Python based on a full toolchain (intel/iomkl/foss). If you don't load a full Python module and attempt to use these with the system Python, they will likely fail.
  • -iimpi/iompi/gompi-2017A - these are packages that required MPI, but didn't necessarily require linear algebra packages like MKL or BLAS/FFTW/LAPACK. These are useful if you want to, for example, use the Intel compilers and MPI but want the non-MKL FFTW. [Note: The first letter, 'i' or 'g', indicates whether the compiler was Intel or GCC. The second letter, 'i' or 'o', indicates whether the MPI was Intel MPI or OpenMPI.]
  • -intel/iomkl/foss-2017A - these are the full-blown toolchains with compilers/MPI/math. See the table above for details.
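
Putting that together, here is how the suffixes read in practice. The module names below are hypothetical illustrations, not a list of installed software; use "module spider <name>" to see what actually exists:

  bzip2/1.0.6-GCCcore-6.3.0
      # GCCcore build: one copy serves the intel, iomkl and foss toolchains
  libxml2/2.9.4-GCCcore-6.3.0-Python-2.7.12-bare
      # built against a bare Python; load a full Python module before using it
  FFTW/3.3.6-iimpi-2017A
      # Intel compilers + Intel MPI, with non-MKL FFTW
  GROMACS/2016.3-foss-2017A
      # full toolchain: GCC + OpenMPI + open-source math libraries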

There may be variations on that in the future. But that covers most of 2017A for now.

Motivations for the 2017A toolchain

  1. minimizing module count - for example, it makes no sense to have three versions of bzip2 (intel/iomkl/foss) when a single version built with GCC can be used for all.
  2. more closely aligning with Linux-distribution-provided build tools - in addition to the above, we wanted to make sure core utilities like binutils were well suited to the C library (glibc) installed on the cluster involved. Beyond that, we found that in most cases the provided build tools like autoconf/automake/cmake/make/libtool/pkgconfig were sufficient, so we used the system-provided ones where possible. We also use the system-provided openssl-devel (for timely security updates) and zlib-devel (which is required by openssl-devel).

Our newest toolchains - 2017b

Other than newer versions, the breakdown for 2017b is similar to 2017A. Key differences include:

  • using an updated version of binutils. For 2017A, we went with the system binutils in order to eliminate it as the cause of some of the problems we were having on terra. It turned out not to be a factor. However, we also found that it precluded us from taking advantage of features like AVX2 (and AVX-512) on terra and made it impossible to install TensorFlow on ada (a needed command was missing). So we've gone back to using the binutils recommended upstream by the EasyBuild people.
  • the "Python-bare" packages are much less prevalent. The motivation for this was to reduce the total number of modules. For example, there is no reason to build an intel, iomkl and foss version of YAML when the GCCcore version would work for all three. However, we've not had much luck convincing the upstream that the -bare option is either viable (which we've demonostrated it is) or clear to users (they have a point there... there has certainly been confusion at times with 2017A). So, for the most part, you won't see as many Python-bare packages this time around (though there are still a few).

Others

In the past, we've offered different combinations, including some that used the Portland Group compilers and some that used different variants of MPI (e.g. MVAPICH, MPICH, MPICH2). If the need arises to build such toolchains in the future, we will consider it. But for now, we recommend users stick to one of the toolchains above (preferably the most recent).