Singularity/Apptainer
Containers are a way to package an environment along with an executable so that no additional installation or setup is required to run it on a different machine. Singularity, and its equivalent Apptainer, is a container runtime designed specifically for use on HPC clusters. If you have software that depends on a different software environment than what is installed on our HPRC clusters, Singularity/Apptainer could be a solution for you.
The basic element of a container solution is an image. An image is a file that includes a self-contained environment with both installed executables and the system libraries they depend on. The container runtime mediates between the libraries in the image and the libraries on the host system.
HPRC provides multiple container runtimes.
- On the Grace cluster, Singularity is the only allowed container runtime.
- On the FASTER cluster, Singularity and Charliecloud are both supported.
- On the ACES cluster, Singularity/Apptainer (varies by node), Charliecloud, and Docker are supported.
- On the Launch cluster, Singularity and Charliecloud are supported.
This page describes how to use Singularity and Apptainer on HPRC clusters. The usage is identical (Apptainer accepts Singularity commands). Beyond this point, the term Singularity will be used and it is implied that the same applies to Apptainer.
Why use Containers
- Shareability: you can share your container image file with others by uploading it to a public repository, and download files shared by others.
- Portability: you can use image files made for any computer with the same architecture (x86-64).
- Reproducibility: cluster environments can change whenever the locally installed software gets updated. Container users are largely unaffected by this.
Why use Singularity
- Security: Singularity grants the user no additional privileges or permissions, so you can't harm the cluster by using Singularity, nor can other users harm you.
- Independence: Singularity does not require root permission to run, so you don't need to ask your administrators for help installing anything.
- Speed: Singularity was designed to run "close to the hardware". It can take advantage of high-performance cluster technologies like Infiniband and GPUs.
Getting a container image
Container images are found in both public and private repositories available on the internet, such as DockerHub. The singularity pull command can automatically download those images and convert them to the Singularity file format. Read more about singularity pull in the Singularity user guide or the Apptainer user guide. Also see our detailed examples page for other popular repositories, and our Docker page to learn more about working with Docker images on the ACES cluster.
Caution: DockerHub hosts public repositories; do not trust unverified sources!
Warning: downloading a large image file is resource-intensive and takes a long time.
Read this before using Singularity pull commands on HPRC clusters
On HPRC clusters, Singularity commands must be executed on compute nodes because they are too resource-intensive for login nodes. To reach a compute node interactively for command line use, there are two options:
- use the VNC Interactive App in the Portal, or
- use srun from a login node with the --pty option. Read more about srun options in the Slurm manual.
Singularity stores data in a cache directory named .singularity to make future commands faster. By default, this cache will be in your /home directory, which will quickly use up your file quota. It is recommended to tell singularity to use a directory on the local /tmp disk instead.
export SINGULARITY_CACHEDIR=$TMPDIR/.singularity
Some Singularity commands require internet access. In order to access the internet from compute nodes, use the Web Proxy module.
module load WebProxy
Singularity pull examples
These examples fetch a container image located on DockerHub at https://hub.docker.com/_/hello-world. The file will be named hello-world.sif and it will be in your scratch directory.
Example on Grace, interactive job
On a login node:
srun --nodes=1 --ntasks-per-node=4 --mem=30G --time=01:00:00 --pty bash -i
#(wait for job to start)
On a compute node:
cd $SCRATCH
export SINGULARITY_CACHEDIR=$TMPDIR/.singularity
module load WebProxy
singularity pull hello-world.sif docker://hello-world
#(wait for download and convert)
exit
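Once the pull finishes and you have exited back to the login node, you can confirm that the image was created (a quick check, assuming the image landed in your scratch directory as above):
ls -lh $SCRATCH/hello-world.sif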
Example on Grace, batch job
Create a file named singularity_pull.sh:
#!/bin/bash
## JOB SPECIFICATIONS
#SBATCH --job-name=singularity_pull #Set the job name to "singularity_pull"
#SBATCH --time=01:00:00 #Set the wall clock limit to 1hr
#SBATCH --nodes=1 #Request 1 node
#SBATCH --ntasks=4 #Request 4 tasks
#SBATCH --mem=30G #Request 30GB per node
#SBATCH --output=singularity_pull.%j #Send stdout/err to "singularity_pull.[jobID]"
# set up environment for download
cd $SCRATCH
export SINGULARITY_CACHEDIR=$TMPDIR/.singularity
module load WebProxy
# execute download
singularity pull hello-world.sif docker://hello-world
On a login node:
sbatch singularity_pull.sh
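You can then monitor the job and look for the image once it completes (a sketch; the exact squeue output depends on your job):
squeue -u $USER                    # check whether the job is still pending or running
ls -lh $SCRATCH/hello-world.sif    # the image appears here when the job finishes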
Interact with container
Once a container image file is in place on an HPRC cluster, it can be used to control your environment for computational tasks.
These examples use a container image almalinux.sif from https://hub.docker.com/_/almalinux, which is a lightweight derivative of Red Hat Enterprise Linux.
Shell
The shell command allows you to spawn a new shell within your container and interact with it one command at a time. Don't forget to exit when you're done.
singularity shell <image.sif>
Example:
[user@compute dir]$ singularity shell almalinux.sif
Singularity> ls /
bin  dev  etc  lib  media  opt  root  sbin  singularity  sys  usr
ch  environment  home  lib64  mnt  proc  run  scratch  srv  tmp  var
Singularity> cat /etc/redhat-release
AlmaLinux release 8.8 (Sapphire Caracal)
Singularity> exit
exit
[user@compute dir]$
Executing commands
The exec command allows you to execute a custom command within a container by specifying the image file and the command.
singularity exec <image.sif> <command>
The command can refer to an executable installed inside the container, or to a script located on a mounted cluster filesystem (see Files in and outside a container).
Example program installed inside image:
singularity exec almalinux.sif bash --version
Example: an executable file myscript.sh that
- starts with #!/usr/bin/env bash
- has the executable permission set by chmod u+x myscript.sh
- is located in the current directory
singularity exec almalinux.sif ./myscript.sh
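For illustration, a minimal myscript.sh satisfying these requirements could look like this (the contents are just placeholders):
#!/usr/bin/env bash
# report which OS the script sees, confirming it runs inside the container
cat /etc/redhat-release
echo "hello from inside the container"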
Running a container
Execute the default runscript defined in the container
singularity run hello-world.sif
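Because a SIF image is itself an executable file, you can also invoke the runscript by executing the image directly (assuming the execute permission is set, which singularity pull normally does):
./hello-world.sif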
Running Singularity containers on HPRC clusters
Files in and outside a container
The filesystem inside the container is isolated from the filesystem outside the container. In order to access your files on a real, physical filesystem, you have to ensure that filesystem's directory is mounted. To specify a mapping between an outside directory and an inside directory, use the SINGULARITY_BINDPATH environment variable or the --bind command line option.
export SINGULARITY_BINDPATH="/dir/outside:/dir/inside"
or
singularity <command> --bind "/dir/outside:/dir/inside" <...>
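For example, to make a data directory in scratch visible inside the container at /data (the directory names here are hypothetical):
singularity exec --bind "$SCRATCH/mydata:/data" almalinux.sif ls /data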
Singularity on HPRC clusters is configured to automatically bind /tmp, /scratch, your home directory $HOME, and the current working directory $PWD to the same paths inside the container. These can be disabled with the --no-mount option if needed.
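For example, to disable the automatic bind of your home directory (a sketch; the values accepted by --no-mount can vary with the installed Singularity/Apptainer version, so check singularity exec --help):
singularity exec --no-mount home almalinux.sif ls /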
GPU in a container
If your container has been compiled with an appropriate CUDA version, it should work with the local GPUs. Just add the --nv flag to your singularity command.
Example
singularity exec --nv tensorflow-gpu.sif python3
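To verify that a GPU is actually visible from inside the container, a quick check along these lines can help (this assumes the tensorflow-gpu.sif image above contains TensorFlow 2.x):
singularity exec --nv tensorflow-gpu.sif python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"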
Example Batch Job
Singularity is available on compute nodes with no extra commands needed.
Assuming example.sh, myscript.sh, and almalinux.sif are all in the same directory, here is an example batch file example.sh:
#!/bin/bash
##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=test #Set the job name to "test"
#SBATCH --time=00:10:00 #Set the wall clock limit to 10 minutes
#SBATCH --ntasks=4 #Request 4 tasks
#SBATCH --mem=30G #Request 30GB per node
#SBATCH --output=test.%j #Send stdout/err to "test.[jobID]"
singularity exec almalinux.sif ./myscript.sh
On a login node:
sbatch example.sh
Interactive apps via Portal
Some of the Graphical Interactive Apps in HPRC Portal support the use of Singularity environments. Currently:
- Jupyter Notebook. You must provide a singularity image that has a working Jupyter app installed in it.
You can request other apps to be supported in this way. We intend to increase this list going forward.
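Before selecting a Singularity environment in the portal, you can confirm that Jupyter is actually present in your image (my-jupyter.sif is a hypothetical image name):
singularity exec my-jupyter.sif jupyter --version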
Saving Data in the Container Filesystem
Why Overlay
Have any of these problems?
- Need to install additional software in a container (e.g. pip install)
- Wish to save your work persistently
- Millions of files and running out of file quota
In Docker-like container environments, the user has root privileges during runtime and is free to modify the container filesystem at will. In Singularity this is not the case: a Singularity user generally does not have root privileges during runtime, Singularity images are treated as read-only, and the container filesystem is ephemeral, disappearing when the runtime shuts down. A solution to this problem is an overlay image, which extends the filesystem inside the container. When you use an overlay, any files you create in the container filesystem are saved persistently in the overlay image. As far as the real, physical filesystem is concerned, the whole overlay is a single file: it counts as 1 towards your file quota.
Create an Overlay Image
Creation of an overlay file requires a feature named mkfs.ext3 -d, which is available on FASTER and ACES but not on Grace.
Create an Overlay Image on FASTER or ACES
To create a 1 GiB overlay image named overlay.img:
singularity overlay create --size 1024 overlay.img
Create an Overlay Image on Grace
The needed feature is installed in a standard Ubuntu container. Example:
- Image file: ubuntu-18.04.sif
- From docker://ubuntu:18.04
- Located at /scratch/data/Singularity/images
Creation of an overlay image must be done on the /tmp filesystem on Grace because the Lustre filesystem (/scratch, etc.) lacks some necessary features. $TMPDIR points to a job-specific location in /tmp.
Creation of an overlay filesystem begins with setting up a pair of empty directories named upper and work. Creating these directories manually is the only way to ensure they have the correct ownership and permissions.
Creation of an overlay file also means pre-filling the virtual disk with initial data, which is easiest if it's all zeros. The tool dd does this task.
Choose a size for your overlay file and a block size (recommended: 1M). Divide the two to get the number of blocks. Example:
- Overlay size: 200 MB
- Block size: 1 MB
- Block count: 200
Assuming you start from a login node, execute the following commands. Substitute your size choices and the /path/to/final/location (recommended: $SCRATCH).
srun --mem=512m --time=01:00:00 --pty bash -i
singularity shell ubuntu-18.04.sif
cd $TMPDIR
mkdir -p overlay_tmp/upper overlay_tmp/work
dd if=/dev/zero of=overlay.img count=200 bs=1M
mkfs.ext3 -d overlay_tmp overlay.img
cp overlay.img $SCRATCH
Use an Overlay Image
Whenever you execute a singularity command, you can add the --overlay option along with the path to an overlay image file.
singularity <command> --overlay overlay.img <...>
For example:
[user@compute dir]$ singularity shell --overlay overlay.img ubuntu-18.04.sif
Singularity> mkdir /new_dir
Singularity> touch /new_dir/new_file
Singularity> exit
exit
[user@compute dir]$ singularity shell --overlay overlay.img ubuntu-18.04.sif
Singularity> ls /new_dir
new_file
Only one running process can access an overlay file at a given time, so if you have multiple batch jobs, you will need to do some extra steps to coordinate them.
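One common workaround is to mount the overlay read-only in jobs that only need to read from it; multiple processes can share a read-only overlay at the same time. Append :ro to the overlay path (writes inside the container will then fail), for example:
singularity exec --overlay overlay.img:ro ubuntu-18.04.sif ls /new_dir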
Build and Modify your own containers
Building and modifying containers on older operating systems usually requires sudo with root privileges, which is not available to users on HPRC for security reasons. The Grace cluster currently runs CentOS 7, which makes building container images there impossible. Please build container images elsewhere and then copy them to the Grace cluster.
Newer operating systems (Redhat 8+ and derivatives) provide a feature called user namespace which enables rootless building of container images. This is available on the FASTER, ACES, and Launch clusters.
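As a minimal sketch of a rootless build on FASTER, ACES, or Launch (the definition file contents here are only an illustration; run this on a compute node with the Web Proxy module loaded, and depending on the installed version you may need to add the --fakeroot option):
# write a small definition file that starts from the AlmaLinux 8 Docker image
cat > my-container.def << 'EOF'
Bootstrap: docker
From: almalinux:8

%post
    dnf -y install python3 && dnf clean all
EOF

# build the image; no sudo is needed where user namespaces are enabled
module load WebProxy
singularity build my-container.sif my-container.def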
Instructions
- Singularity guide to build images from scratch
- Apptainer guide to build images from scratch
- Also see our detailed examples page.
Enable mounting of filesystems
Optional: To mount cluster filesystems in the container, it is convenient to pre-create these empty directories in your container (see the sketch after this list):
/general
/scratch
/work
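If you build your own image, one way to pre-create these mount points is in the %post section of your definition file, for example (a sketch; adapt it to your own definition file):
%post
    mkdir -p /general /scratch /work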
Examples
We have some detailed examples available.
HPRC Publications
HPRC has published a paper on the topic of container use in HPC.
- Richard Lawrence, Dhruva K. Chakravorty, Lisa M. Perez, Wesley Brashear, Zhenhua He, Joshua Winchell, and Honggao Liu. 2024. Container Adoption in Campus High Performance Computing at Texas A&M University. In Practice and Experience in Advanced Research Computing (PEARC '24), July 21-25, 2024, Providence, RI, USA. ACM, New York, NY, USA, 7 pages. DOI: 10.1145/3626203.3670550