- 1 Singularity
- 2 Getting a container image
- 3 Interact with container
- 4 Running Singularity container on HPRC clusters
- 5 Build and Modify your own containers
- 6 Examples
- 7 Additional Documents
Singularity (http://singularity.lbl.gov/) is a container solution that does not require root permission to run, which makes it suitable for the HPC environment. If you have software that depends on a different environment than what is installed on the HPRC clusters, Singularity may be a solution for you.
The basic element of a container solution is an image: a file containing a self-contained environment with both installed executables and the system libraries they depend on. The container runtime mediates between the libraries in the image and the libraries on the host system. On HPRC, the container runtime software is Singularity, which can read many common container image file formats, including Docker images.
This page describes how to run Singularity containers on the Ada and Terra clusters.
Getting a container image
Container images can be found in both public and private repositories on the internet, such as Docker Hub and Singularity Hub. The singularity pull command can automatically download those images and convert them to the Singularity file format. Read more in the Singularity user guide. A word of caution: Docker Hub and Singularity Hub are public repositories; do not trust unverified sources!
Warning: downloading a large image file is resource-intensive and takes a long time.
Read this before using Singularity pull commands on HPRC clusters
On HPRC, singularity commands must be executed on compute nodes because they are too resource-intensive for login nodes. To use a compute node interactively on Terra or Grace, use the Slurm command srun with the --pty option. On Ada, use the LSF command bsub with the -I option. Details about these options are in the Slurm Manual and LSF Manual, respectively.
# Terra/Grace (Slurm):
srun --nodes=1 --ntasks-per-node=1 --mem=512m --time=01:00:00 --pty bash -i
# Ada (LSF):
bsub -W 1:00 -n 1 -Is bash
Singularity stores data in a cache directory named .singularity to make future commands faster. By default, this cache is placed in your /home directory, which will quickly use up your file quota. It is recommended to point Singularity at a directory in your /scratch space instead.
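For example, you can relocate the cache to scratch by setting the SINGULARITY_CACHEDIR environment variable, as the job examples further down this page also do:
export SINGULARITY_CACHEDIR=$SCRATCH/.singularity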
Some Singularity commands require internet access. In order to access the internet from compute nodes, follow the Web Proxy instructions.
export http_proxy=<ip_address from Web Proxy guide above>
export https_proxy=<ip_address from Web Proxy guide above>
Singularity pull examples
Example on Terra, interactive job
The example container image is hosted on Docker Hub at https://hub.docker.com/_/hello-world .
srun --nodes=1 --ntasks-per-node=4 --mem=2560M --time=01:00:00 --pty bash -i
export SINGULARITY_CACHEDIR=$SCRATCH/.singularity
export http_proxy=10.76.5.24:8080
export https_proxy=10.76.5.24:8080
singularity pull hello-world.sif docker://hello-world
Example on Terra, batch job
The example container image is hosted on Docker Hub at https://hub.docker.com/_/hello-world .
#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE               #Do not propagate environment

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=sing_pull        #Set the job name to "sing_pull"
#SBATCH --time=01:00:00             #Set the wall clock limit to 1hr
#SBATCH --ntasks=4                  #Request 4 tasks
#SBATCH --mem=2560M                 #Request 2560MB (2.5GB) per node
#SBATCH --output=sif_dl.%j          #Send stdout/err to "sif_dl.[jobID]"

##OPTIONAL JOB SPECIFICATIONS
##SBATCH --account=123456           #Set billing account to 123456
##SBATCH --mail-type=ALL            #Send email on all job events
##SBATCH --mail-user=email_address  #Send all emails to email_address

# set up environment for download
export SINGULARITY_CACHEDIR=$SCRATCH/.singularity
export http_proxy=10.76.5.24:8080
export https_proxy=10.76.5.24:8080

# execute download
singularity pull hello-world.sif docker://hello-world
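To run the download as a batch job, save the script to a file and submit it with sbatch (the file name sing_pull.slurm is only an illustrative choice):
sbatch sing_pull.slurm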
Interact with container
Once a container image file is in place on an HPRC cluster, it can be used to control your environment for computational tasks.
The shell command allows you to spawn a new shell within your container and interact with it as though it were a small virtual machine.
singularity shell hello-world.sif
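Inside the container shell, you are working against the container's filesystem. A brief illustrative session for an image that includes standard system tools (the minimal hello-world image may not), with a prompt that can vary between Singularity versions:
Singularity> cat /etc/os-release
Singularity> exit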
The exec command allows you to execute a custom command within a container by specifying the image file.
singularity exec hello-world.sif ls -l /
singularity exec hello-world.sif /scratch/user/userid/myprogram
Running a container
Execute the default runscript defined in the container
singularity run hello-world.sif
Running Singularity container on HPRC clusters
Files in a container
The filesystem inside the container is isolated from the filesystem outside the container. In order to access your files on a real, physical filesystem, you have to ensure that filesystem's directory is mounted. By default, Singularity will mount the $HOME directory as well as the current working directory $PWD if it can. To specify additional directories, use the SINGULARITY_BINDPATH environment variable or the --bind command line option.
Recommended: bind your $SCRATCH directory for data files and $TMPDIR for temporary files.
singularity exec --bind "/scratch,$TMPDIR" <image.sif> <command>
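Alternatively, set the SINGULARITY_BINDPATH environment variable once, as the job script examples below do; it then applies to every subsequent singularity command in the session:
export SINGULARITY_BINDPATH="/scratch,$TMPDIR"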
Read more about bind paths in the Singularity user guide.
GPU in a container
If the software in your container was built with CUDA version >= 9, it should work with the local GPUs. Just add the --nv flag to your singularity command.
singularity exec --nv tensorflow-gpu.sif python3
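To check that the GPUs are visible from inside the container, one quick test is to run nvidia-smi, which the --nv flag makes available from the host (tensorflow-gpu.sif is the example image name from above):
singularity exec --nv tensorflow-gpu.sif nvidia-smi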
Sample job script for Ada
This script requests 20 cores (one full compute node) and 300MB of memory per core (6000MB per node). It launches a container with CentOS 6 system libraries.
##NECESSARY JOB SPECIFICATIONS
#BSUB -J singularity1              #Set the job name to "singularity1"
#BSUB -L /bin/bash                 #Use the bash login shell to initialize the job's execution environment
#BSUB -W 0:30                      #Set the wall clock limit to 30 min
#BSUB -n 20                        #Request 20 cores
#BSUB -R "span[ptile=20]"          #Request 20 cores per node
#BSUB -R "rusage[mem=300]"         #Request 300MB per process (CPU) for the job
#BSUB -M 300                       #Set the per process enforceable memory limit to 300MB
#BSUB -o singularity1.%J           #Send stdout and stderr to "singularity1.[jobID]"

##OPTIONAL JOB SPECIFICATIONS
#BSUB -P 123456                    #Set billing account to 123456
#BSUB -u email_address             #Send all emails to email_address
#BSUB -B -N                        #Send email on job begin (-B) and end (-N)

export SINGULARITY_BINDPATH="/scratch,$TMPDIR"

# execute the default runscript defined in the container
singularity run centos6_bootstraped.img

# execute a command within the container
# the command should include an absolute path if it is not in the default search path
singularity exec centos6_bootstraped.img /scratch/user/netid/runme.sh
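To submit the script on Ada, redirect it into bsub so the #BSUB directives are read (the file name singularity1.lsf is only an illustrative choice):
bsub < singularity1.lsf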
Sample job script for Terra
This job script requests 4 cores and 2.5GB of memory per node. It launches a container with CentOS 6 system libraries.
#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE               #Do not propagate environment

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=test             #Set the job name to "test"
#SBATCH --time=00:10:00             #Set the wall clock limit to 10 min
#SBATCH --ntasks=4                  #Request 4 tasks
#SBATCH --mem=2560M                 #Request 2560MB (2.5GB) per node
#SBATCH --output=test.%j            #Send stdout/err to "test.[jobID]"

##OPTIONAL JOB SPECIFICATIONS
##SBATCH --account=123456           #Set billing account to 123456
##SBATCH --mail-type=ALL            #Send email on all job events
##SBATCH --mail-user=email_address  #Send all emails to email_address

export SINGULARITY_BINDPATH="/scratch,$TMPDIR"

# execute the default runscript defined in the container
singularity run centos6_bootstraped.img

# execute a command within the container
# the command should include an absolute path if it is not in the default search path
singularity exec centos6_bootstraped.img /scratch/user/netid/runme.sh
Build and Modify your own containers
Building and modifying containers usually requires sudo with root privileges, which is not available to users on HPRC for security reasons. Instead, users should install Singularity on their own desktop or laptop, where they have sudo rights, prepare the container there, and then upload the finished image to their scratch directory on the cluster.
- External guide: Build images from scratch.
- Also see our detailed examples page.
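As a minimal sketch of that workflow on your own machine (the file names and definition contents here are illustrative only, not a prescribed recipe), you would write a definition file and build an image from it with sudo:

Bootstrap: docker
From: centos:6

%post
    # commands here run inside the container at build time, as root
    echo "built on $(date)" > /etc/build_info

%runscript
    cat /etc/build_info

Assuming the definition is saved as mycontainer.def, build the image with:

sudo singularity build mycontainer.sif mycontainer.def

The resulting .sif file can then be uploaded (for example, with scp) to your scratch directory on the cluster.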
Passing in filesystems
Optional: to access the cluster filesystems from inside the container, it is convenient to pre-create the corresponding mount-point directories in your container:
mkdir /general /scratch /work
Optional: also replace /home with a symbolic link:
mv /home /home.orig
ln -s /general/home /home
We have some detailed examples available.