A toolchain on a TAMU HPRC cluster is a collection of tools used for building software. They typically include:
- a compiler collection providing basic language support (C/Fortran/C++)
- an MPI implementation for multi-node communication
- a set of linear algebra libraries (FFTW/BLAS/LAPACK) for accelerated math
Most modules on our clusters include the toolchain that was used to build them in their version tag (e.g. Python/3.5.2-intel-2016b was built with the intel/2016b toolchain listed below).
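For instance, the toolchain suffix can be read straight off the version tag. A minimal sketch in shell, using a hypothetical module name:

```shell
# Hypothetical module name of the form name/version-toolchain.
mod="Python/3.5.2-intel-2016b"

# Strip everything up to and including the first '-' to get the toolchain tag.
toolchain="${mod#*-}"
echo "$toolchain"   # prints "intel-2016b"
```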
Mixing components from different toolchains almost always leads to problems. For example, mixing Intel MPI modules with OpenMPI modules virtually guarantees your program will not run (even if you manage to get it to compile). We recommend always using modules from the same (sub)toolchain. [Since late 2016 we've been looking at EasyBuild's --minimal-toolchain option to cut down on module count, so "GCCcore" is now a common new "subtoolchain".]
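As a quick sanity check, you can scan the modules you plan to load for mixed MPI implementations. A sketch with hypothetical module names (on a cluster, the list would come from `module list`):

```shell
# Hypothetical list of loaded modules; on a cluster this would come from `module list`.
mods="OpenMPI/2.0.2-GCC-6.3.0 impi/2017.1.132-iccifort-2017.1.132-GCC-6.3.0"

# Count how many distinct MPI families appear.
families=$(for m in $mods; do
  case "$m" in
    OpenMPI/*) echo OpenMPI ;;
    impi/*)    echo "Intel MPI" ;;
  esac
done | sort -u | wc -l)

if [ "$families" -gt 1 ]; then
  echo "WARNING: modules from more than one MPI implementation are loaded"
fi
```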
|Toolchain|Compiler(s)|MPI|Linear Algebra|Notes|
|---|---|---|---|---|
|intel|Intel icc/ifort (built atop a particular version of GCC)|Intel MPI|Intel MKL|Composed of the Intel Cluster Toolkit. These are the recommended toolchains for HPRC's Intel-based clusters.|
|iomkl|Intel icc/ifort|OpenMPI|Intel MKL|A combination of the Intel compiler and Math Kernel Library components with OpenMPI.|
|foss|GCC|OpenMPI|OpenBLAS/LAPACK, ScaLAPACK, FFTW|Based upon "Free and Open-Source Software". These toolchains produce slightly slower code but do provide portability to other platforms (e.g. those without an Intel compiler).|
- For details on using Intel MKL's BLAS and ScaLAPACK (and at some point FFTW), see our current Intel MKL page.
- Note: OpenMPI 1.10.x is largely incompatible with both earlier and later versions and should be avoided.
Future (not yet supported)
```
binutils/2.28
CUDA/9.1.85
FFTW/3.3.7
GCC/6.4.0
icc/2018.1.163-GCC-6.4.0-2.28
ifort/2018.1.163-GCC-6.4.0-2.28
impi/2018.1.163-iccifort-2018.1.163-GCC-6.4.0-2.28
numactl/2.0.11-GCCcore-6.4.0
OpenBLAS/0.2.20-GCC-6.4.0-2.28
OpenMPI/2.1.2-gcccuda-2018a
ScaLAPACK/2.0.2-gompic-2018a-OpenBLAS-0.2.20
```
Our recommended toolchains - 2017A
Last updated: May 25, 2018
A breakdown of our 2017A toolchains
For the past few years, TAMU HPRC has been using EasyBuild to deploy commonly used HPC libraries. In the past we've tried to use what EasyBuild contributors (who have HPC systems of their own) use/deploy as a guide for software deployment. For 2017A, we've set off on a new path.
With the recent addition of the --minimal-toolchain option, which allows us to minimize the overall module count and share common modules among many toolchains, we've tried to trim things down. Additionally, in an attempt to stay closely aligned with the Linux distribution in use, we've deferred to many of the distribution-provided build tools like autoconf/automake/binutils/cmake/make/libtool/pkgconfig/etc.
As such, it isn't as easy to tell which modules in the 2017A toolchain work with which.
Here is a breakdown, by version suffix, of what composes the 2017A toolchains:
- -GCCcore-6.3.0 - these were built with gcc 6.3.0 using the system binutils (which is a deviation from how EasyBuild does it, but we thought it best to try here).
- -GCCcore-6.3.0-Python-2.7.12-bare - these were built with the most basic of Python (only what was required) but do not include a proper Python module. If you use these, you must load a Python based on a full toolchain (intel/iomkl/foss). If you don't load a full Python module and attempt to use these with the system Python, they will likely fail.
- -iimpi/iompi/gompi-2017A - these are packages that required MPI, but didn't necessarily require linear algebra packages like MKL or BLAS/FFTW/LAPACK. These are useful if you want to, for example, use the Intel compilers and MPI but want to use the non-MKL FFTW. [Note: The first letter, 'i' or 'g', indicates whether the compiler was Intel or GCC. The second letter, 'i' or 'o' indicates whether the MPI was Intel or OpenMPI.]
- -intel/iomkl/foss-2017A - these are the full blown toolchains with compilers/MPI/math. See the table above for details.
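The breakdown above can be sketched as a simple suffix match. A sketch in shell (the specific module names used here are illustrative):

```shell
# Classify a 2017A module by its version suffix (a sketch; names are illustrative).
classify() {
  case "$1" in
    *-GCCcore-6.3.0-Python-2.7.12-bare) echo "Python-bare: load a full-toolchain Python first" ;;
    *-GCCcore-6.3.0)                    echo "GCCcore: shared across all 2017A toolchains" ;;
    *-iimpi-2017A|*-iompi-2017A|*-gompi-2017A) echo "compiler + MPI, no linear algebra" ;;
    *-intel-2017A|*-iomkl-2017A|*-foss-2017A)  echo "full toolchain (compiler/MPI/math)" ;;
    *) echo "unknown suffix" ;;
  esac
}

classify "bzip2/1.0.6-GCCcore-6.3.0"   # prints "GCCcore: shared across all 2017A toolchains"
classify "HDF5/1.8.18-iimpi-2017A"     # prints "compiler + MPI, no linear algebra"
```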
There may be variations on that in the future. But that covers most of 2017A for now.
Motivations for the 2017A toolchain
- minimizing module count - for example, it makes no sense to have three versions of bzip2 (intel/iomkl/foss) when a single version built with GCC can be used for all.
- more closely aligning with Linux distribution provided build tools - in addition to the above, we wanted to make sure core utilities like binutils were well suited to the C library (glibc) installed on the cluster involved. Beyond that, we found that in most cases the provided build tools like autoconf/automake/cmake/make/libtool/pkgconfig were sufficient, so we used the system-provided ones where possible. We also use the system-provided openssl-devel (for timely security updates) and zlib-devel (which is required by openssl-devel).
Our newest toolchains - 2017b
Other than newer versions, the breakdown for 2017b is similar to 2017A. Key differences include:
- using an updated version of binutils. For 2017A, we went with the system binutils in order to eliminate it as the cause of some of the problems we were having on terra. It turned out not to be a factor. However, we also found that it precluded us from taking advantage of features such as AVX2 (and AVX512) on terra and made it impossible to install TensorFlow on ada (missing needed command). So we've gone back to using the binutils recommended upstream by the EasyBuild developers.
- the "Python-bare" packages are much less prevalent. The motivation for these was to reduce the total number of modules. For example, there is no reason to build intel, iomkl, and foss versions of YAML when the GCCcore version would work for all three. However, we've not had much luck convincing the upstream that the -bare option is either viable (which we've demonstrated it is) or clear to users (they have a point there... there has certainly been confusion at times with 2017A). So, for the most part, you won't see as many Python-bare packages this time around (though there are still a few).
In the past, we've offered different combinations, including some that use the Portland Group compilers and some that use different variants of MPI (e.g. MVAPICH, MPICH, MPICH2). If the need arises to build such toolchains in the future, we will consider it. But for now, users are recommended to use one of the toolchains above (preferably the most recent).