Toolchains
Overview
A toolchain on a TAMU HPRC cluster is a collection of tools used for building software. A toolchain typically includes:
- a compiler collection providing basic language support (C/Fortran/C++)
- an MPI implementation for multi-node communication
- a set of math libraries (BLAS/LAPACK/FFTW) for accelerated linear algebra and FFTs
The toolchain used to build a module can be found by running the module spider command with the full module name:
module spider Python/3.11.5
You will need to load all module(s) on any one of the lines below before the "Python/3.11.5" module is available to load.
GCCcore/13.2.0
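Once the prerequisite is known, it can be loaded together with the module itself, for example:
module load GCCcore/13.2.0 Python/3.11.5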
Mixing components from different toolchains almost always leads to problems. For example, if you mix Intel MPI modules with OpenMPI modules, your program is virtually guaranteed not to run (even if you manage to get it to compile). We recommend you always use modules from the same (sub)toolchain.
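Before switching to a different toolchain, it is safest to unload all currently loaded modules first so that nothing from the old toolchain is left behind:
module purge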
Currently Supported
To see a list of currently supported toolchains on an HPRC cluster, run the toolchains -a command
toolchains -a
Additional Details
- For details on using Intel MKL's BLAS and ScaLAPACK (and, at some point, FFTW), see our current Intel MKL page. Also see the buildenv modules section below for a number of environment variables that are useful when building with different toolchains.
- Note: OpenMPI 1.10.x is largely incompatible with all previous and later versions and should probably be avoided.
Our recommended toolchains
- In general, we recommend the latest intel toolchain, with the Intel compilers, Intel MPI, and the MKL math libraries, for the best performance on HPRC clusters.
- In many cases, getting code that builds with gcc to build with icc takes a lot of effort, so you may find things easier with the foss toolchain.
- As of August 2024, the 2022b toolchain suite is recommended, mainly because many of our software modules are built with this version. Older versions may have more modules available and newer ones may offer some bug fixes and speedups, but for now we recommend 2022b (e.g. foss/2022b), which can be loaded as shown below.
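For instance, to load the recommended open-source toolchain and see which component modules it pulls in:
module load foss/2022b
module list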
A breakdown of our 2022b toolchains
For the past few years, TAMU HPRC has been using EasyBuild to deploy commonly used HPC libraries.
As a result, it is not always obvious which modules in the 2022b toolchain work with which others.
Components by version suffix
- -GCCcore-12.2.0 - these were built with gcc 12.2.0 using the system binutils
- -GCC-12.2.0 - these were built with gcc 12.2.0 using the binutils/2.39 module, which is also loaded
- -foss-2022b - this is the fully populated toolchain with compilers/MPI/math. See the currently supported toolchains listed above for details.
There may be variations on that in the future. But that covers most of 2022b for now.
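For instance, you can usually tell which sub-toolchain a module belongs to from its name:
module avail FFTW
A module such as FFTW/3.3.10-GCC-12.2.0 (the installation referenced in the buildenv output below) ends in -GCC-12.2.0, so it belongs to the GCC sub-toolchain and can safely be combined with other -GCC-12.2.0 or -foss-2022b modules.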
Motivations
- minimizing module count - for example, it makes no sense to have three versions of bzip2 (intel/iomkl/foss) when a single version built with GCC can be used for all.
- more closely aligning with Linux-distribution-provided build tools - in addition to the above, we wanted to make sure core utilities like binutils were well suited to the C library (glibc) installed on each cluster. Beyond that, we found that in most cases the distribution-provided build tools like autoconf/automake/cmake/make/libtool/pkgconfig were sufficient, so we tried to use the system-provided ones where possible. We also use the system-provided openssl-devel (for timely security updates) and zlib-devel (which is required by openssl-devel).
Deficiencies
- by relying upon the system binutils, we were:
- unable to fully use AVX2 and AVX512 vector extensions.
- unable to build certain programs that require tools only provided by newer binutils releases
Others
In the past, we've offered different combinations, including some using the Portland Group (PGI) compilers and some using different MPI variants (e.g. MVAPICH, MPICH, MPICH2). If the need arises to build such toolchains in the future, we will consider it, but for now we recommend that users use one of the toolchains above (preferably the most recent).
The buildenv modules
For some of the toolchains above, there is a buildenv module that:
- loads the toolchain
- sets a number of useful flags used when compiling/linking programs with that toolchain
Examples
Here is what buildenv provides for the foss/2022b toolchain.
foss/2022b
$ module load foss/2022b buildenv
$ module -t show buildenv
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
/sw/eb/mods/all/MPI/GCC/12.2.0/OpenMPI/4.1.4/buildenv/default.lua:
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
help([[
Description
===========
This module sets a group of environment variables for compilers, linkers, maths libraries, etc., that
you can use to easily transition between toolchains when building your software. To query the variables being set
please use: module show <this module name>
More information
================
- Homepage: None
]])
whatis("Description: This module sets a group of environment variables for compilers, linkers, maths libraries, etc., that
you can use to easily transition between toolchains when building your software. To query the variables being set
please use: module show <this module name>")
whatis("Homepage: None")
whatis("URL: None")
conflict("buildenv")
setenv("EBROOTBUILDENV","/sw/eb/sw/buildenv/default-foss-2022b")
setenv("EBVERSIONBUILDENV","default")
setenv("EBDEVELBUILDENV","/sw/eb/sw/buildenv/default-foss-2022b/easybuild/MPI-GCC-12.2.0-OpenMPI-4.1.4-buildenv-default-easybuild-devel")
setenv("BLAS_INC_DIR","/sw/eb/sw/FlexiBLAS/3.2.1-GCC-12.2.0/include/flexiblas")
setenv("BLAS_LAPACK_INC_DIR","/sw/eb/sw/FlexiBLAS/3.2.1-GCC-12.2.0/include/flexiblas")
setenv("BLAS_LAPACK_LIB_DIR","/sw/eb/sw/FlexiBLAS/3.2.1-GCC-12.2.0/lib")
setenv("BLAS_LAPACK_MT_SHARED_LIBS","libflexiblas.so,libgfortran.so")
setenv("BLAS_LAPACK_MT_STATIC_LIBS","libflexiblas.a,libgfortran.a")
setenv("BLAS_LAPACK_SHARED_LIBS","libflexiblas.so,libgfortran.so")
setenv("BLAS_LAPACK_STATIC_LIBS","libflexiblas.a,libgfortran.a")
setenv("BLAS_LIB_DIR","/sw/eb/sw/FlexiBLAS/3.2.1-GCC-12.2.0/lib")
setenv("BLAS_MT_SHARED_LIBS","libflexiblas.so,libgfortran.so")
setenv("BLAS_MT_STATIC_LIBS","libflexiblas.a,libgfortran.a")
setenv("BLAS_SHARED_LIBS","libflexiblas.so,libgfortran.so")
setenv("BLAS_STATIC_LIBS","libflexiblas.a,libgfortran.a")
setenv("CC","gcc")
setenv("CFLAGS","-O2 -ftree-vectorize -march=native -fno-math-errno")
setenv("CPPFLAGS","-I/sw/eb/sw/FFTW/3.3.10-GCC-12.2.0/include -I/sw/eb/sw/FlexiBLAS/3.2.1-GCC-12.2.0/include -I/sw/eb/sw/FlexiBLAS/3.2.1-GCC-12.2.0/include/flexiblas")
setenv("CXX","g++")
setenv("CXXFLAGS","-O2 -ftree-vectorize -march=native -fno-math-errno")
setenv("F77","gfortran")
setenv("F90","gfortran")
setenv("F90FLAGS","-O2 -ftree-vectorize -march=native -fno-math-errno")
setenv("FC","gfortran")
setenv("FCFLAGS","-O2 -ftree-vectorize -march=native -fno-math-errno")
setenv("FFLAGS","-O2 -ftree-vectorize -march=native -fno-math-errno")
setenv("FFTW_INC_DIR","/sw/eb/sw/FFTW/3.3.10-GCC-12.2.0/include")
setenv("FFTW_LIB_DIR","/sw/eb/sw/FFTW/3.3.10-GCC-12.2.0/lib")
setenv("FFTW_SHARED_LIBS","libfftw3.so")
setenv("FFTW_SHARED_LIBS_MT","libfftw3.so,libpthread.so")
setenv("FFTW_STATIC_LIBS","libfftw3.a")
setenv("FFTW_STATIC_LIBS_MT","libfftw3.a,libpthread.a")
setenv("FFT_INC_DIR","/sw/eb/sw/FFTW/3.3.10-GCC-12.2.0/include")
setenv("FFT_LIB_DIR","/sw/eb/sw/FFTW/3.3.10-GCC-12.2.0/lib")
setenv("FFT_SHARED_LIBS","libfftw3.so")
setenv("FFT_SHARED_LIBS_MT","libfftw3.so,libpthread.so")
setenv("FFT_STATIC_LIBS","libfftw3.a")
setenv("FFT_STATIC_LIBS_MT","libfftw3.a,libpthread.a")
setenv("FLIBS","-lgfortran")
setenv("LAPACK_INC_DIR","/sw/eb/sw/FlexiBLAS/3.2.1-GCC-12.2.0/include/flexiblas")
setenv("LAPACK_LIB_DIR","/sw/eb/sw/FlexiBLAS/3.2.1-GCC-12.2.0/lib")
setenv("LAPACK_MT_SHARED_LIBS","libflexiblas.so,libgfortran.so")
setenv("LAPACK_MT_STATIC_LIBS","libflexiblas.a,libgfortran.a")
setenv("LAPACK_SHARED_LIBS","libflexiblas.so,libgfortran.so")
setenv("LAPACK_STATIC_LIBS","libflexiblas.a,libgfortran.a")
setenv("LDFLAGS","-L/sw/eb/sw/FFTW/3.3.10-GCC-12.2.0/lib64 -L/sw/eb/sw/FFTW/3.3.10-GCC-12.2.0/lib -L/sw/eb/sw/ScaLAPACK/2.2.0-gompi-2022b-fb/lib64 -L/sw/eb/sw/ScaLAPACK/2.2.0-gompi-2022b-fb/lib -L/sw/eb/sw/FlexiBLAS/3.2.1-GCC-12.2.0/lib64 -L/sw/eb/sw/FlexiBLAS/3.2.1-GCC-12.2.0/lib -L/sw/eb/sw/GCCcore/12.2.0/lib64 -L/sw/eb/sw/GCCcore/12.2.0/lib")
setenv("LIBBLAS","-lflexiblas -lgfortran")
setenv("LIBBLAS_MT","-lflexiblas -lgfortran")
setenv("LIBFFT","-lfftw3")
setenv("LIBFFT_MT","-lfftw3 -lpthread")
setenv("LIBLAPACK","-lflexiblas -lgfortran")
setenv("LIBLAPACK_MT","-lflexiblas -lgfortran")
setenv("LIBLAPACK_MT_ONLY","-lflexiblas -lgfortran")
setenv("LIBLAPACK_ONLY","-lflexiblas -lgfortran")
setenv("LIBS","-lm -lpthread")
setenv("LIBSCALAPACK","-lscalapack -lflexiblas -lgfortran")
setenv("LIBSCALAPACK_MT","-lscalapack -lflexiblas -lpthread -lgfortran")
setenv("LIBSCALAPACK_MT_ONLY","-lscalapack -lgfortran")
setenv("LIBSCALAPACK_ONLY","-lscalapack -lgfortran")
setenv("MPICC","mpicc")
setenv("MPICXX","mpicxx")
setenv("MPIF77","mpifort")
setenv("MPIF90","mpifort")
setenv("MPIFC","mpifort")
setenv("MPI_INC_DIR","/sw/eb/sw/OpenMPI/4.1.4-GCC-12.2.0/include")
setenv("MPI_LIB_DIR","/sw/eb/sw/OpenMPI/4.1.4-GCC-12.2.0/lib64")
setenv("MPI_LIB_SHARED","/sw/eb/sw/OpenMPI/4.1.4-GCC-12.2.0/lib64/libmpi.so")
setenv("MPI_LIB_STATIC","")
setenv("OMPI_CC","gcc")
setenv("OMPI_CXX","g++")
setenv("OMPI_F77","gfortran")
setenv("OMPI_F90","gfortran")
setenv("OMPI_FC","gfortran")
setenv("OPTFLAGS","-O2 -ftree-vectorize -march=native")
setenv("PRECFLAGS","-fno-math-errno")
setenv("SCALAPACK_INC_DIR","")
setenv("SCALAPACK_LIB_DIR","/sw/eb/sw/ScaLAPACK/2.2.0-gompi-2022b-fb/lib")
setenv("SCALAPACK_MT_SHARED_LIBS","libscalapack.so,libflexiblas.so,libgfortran.so,libpthread.so")
setenv("SCALAPACK_MT_STATIC_LIBS","libscalapack.a,libflexiblas.a,libgfortran.a,libpthread.a")
setenv("SCALAPACK_SHARED_LIBS","libscalapack.so,libflexiblas.so,libgfortran.so")
setenv("SCALAPACK_STATIC_LIBS","libscalapack.a,libflexiblas.a,libgfortran.a")