Singularity

Singularity (http://singularity.lbl.gov/) is a container solution that does not require root permission to run, so it is suitable for the HPC environment. If your software depends on a different software environment than what is installed on the HPRC clusters, Singularity could be a solution for you.

This page describes how to run Singularity containers on the Ada and Terra clusters.

Prepare container

Users should install Singularity on their desktop or laptop to create or modify containers. The Singularity 2.5 user guide covers both approaches:

Download pre-built images: https://www.sylabs.io/guides/2.5/user-guide/quick_start.html#download-pre-built-images
Build images from scratch: https://www.sylabs.io/guides/2.5/user-guide/quick_start.html#build-images-from-scratch
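For example, a small pre-built test image (used in the examples below) can be pulled from Singularity Hub; a minimal sketch following the Singularity 2.5 quick start:

# pull a pre-built test image from Singularity Hub;
# --name controls the output filename in the current directory
singularity pull --name hello-world.simg shub://vsoch/hello-world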

Passing in filesystems

To access the cluster filesystems from within the container, you will need to create these directories in your container (see the sketch after the list):

/general
/scratch
/work
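A minimal sketch of creating these mount points, run as root inside a writable container on your own machine (for example via sudo singularity shell --writable <image>):

# create the mount points for the cluster filesystems
mkdir /general /scratch /work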

Also, replace /home in the container with a symbolic link:

mv /home /home.orig
ln -s /general/home /home

Interact with a container

Shell

The shell command allows you to spawn a new shell within your container and interact with it as though it were a small virtual machine.

singularity shell hello-world.simg

Executing commands

The exec command allows you to execute a custom command within a container by specifying the image file.

singularity exec hello-world.simg ls -l /
singularity exec hello-world.simg /scratch/user/userid/myprogram

Running a container

Execute the default runscript defined in the container:

singularity run hello-world.simg
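The runscript is defined when the image is built. A minimal sketch of a Singularity 2.x recipe file that sets a runscript (the CentOS 6 Docker base and the echo command are illustrative assumptions, not the actual image used below):

Bootstrap: docker
From: centos:6

%runscript
    echo "Hello from inside the container"

Building from such a recipe requires sudo on your own machine, e.g. sudo singularity build centos6.simg recipe.def (the filenames here are hypothetical).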

Running Singularity containers on HPRC clusters

Users should prepare Singularity containers on their desktop or laptop, where they have sudo rights to modify the container; users do not have sudo rights on the HPRC clusters. Then upload the prepared image to your scratch directory on the cluster.
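For example, the image can be copied up with scp (a sketch; netid is a placeholder, and the Ada login hostname is assumed here):

scp hello-world.simg netid@ada.tamu.edu:/scratch/user/netid/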

Sample job script for Ada

This script requests 20 cores (one compute node) and 300MB memory per core (6000MB per node).

##NECESSARY JOB SPECIFICATIONS
#BSUB -J singularity1        #Set the job name to "singularity1"
#BSUB -L /bin/bash           #Uses the bash login shell to initialize the job's execution environment.
#BSUB -W 0:30                #Set the wall clock limit to 30 min
#BSUB -n 20                  #Request 20 cores
#BSUB -R "span[ptile=20]"    #Request 20 cores per node
#BSUB -R "rusage[mem=300]"   #Request 300MB per process (CPU) for the job
#BSUB -M 300                 #Set the per process enforceable memory limit to 300MB.
#BSUB -o singularity1.%J     #Send stdout and stderr to "singularity1.[jobID]"

##OPTIONAL JOB SPECIFICATIONS
#BSUB -P 123456              #Set billing account to 123456
#BSUB -u email_address       #Send all emails to email_address
#BSUB -B -N                  #Send email on job begin (-B) and end (-N)
#

# execute the default runscript defined in the container 
singularity run centos6_bootstraped.img

# execute a command within the container;
#  use an absolute path if the command is not in the default search path
singularity exec centos6_bootstraped.img /scratch/user/netid/runme.sh
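Submit the script to LSF with bsub; assuming the script above is saved as singularity1.job (a hypothetical filename):

bsub < singularity1.job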


Sample job script for Terra

This job script requests 4 cores and 2.5 GB memory per node.

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE                #Do not propagate environment
#SBATCH --get-user-env=L             #Replicate login environment

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=test              #Set the job name to "test"
#SBATCH --time=00:10:00              #Set the wall clock limit to 10 min
#SBATCH --ntasks=4                   #Request 4 tasks
#SBATCH --mem=2560M                  #Request 2560MB (2.5GB) per node
#SBATCH --output=test.%j             #Send stdout/err to "test.[jobID]"

##OPTIONAL JOB SPECIFICATIONS
##SBATCH --account=123456             #Set billing account to 123456
##SBATCH --mail-type=ALL              #Send email on all job events
##SBATCH --mail-user=email_address    #Send all emails to email_address

# execute the default runscript defined in the container 
singularity run centos6_bootstraped.img

# execute a command within the container;
#  use an absolute path if the command is not in the default search path
singularity exec centos6_bootstraped.img /scratch/user/netid/runme.sh
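Submit the script to Slurm with sbatch; assuming the script above is saved as singularity1.slurm (a hypothetical filename):

sbatch singularity1.slurm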

Examples

We have some detailed examples available.

Additional Documents