Singularity is an open-source application for creating and running software containers, designed primarily for high-performance computing on shared Linux-based computing clusters like CARC systems.
Singularity containers provide a custom user space and enable portable, reproducible, stable, and secure software environments on Linux systems. A Singularity container bundles a primary application and all of its dependencies into a single image file, which can also include data, scripts, and other files if desired. In addition, Singularity containers have direct access to the Linux kernel on the host system (e.g., Discovery or Endeavour compute nodes), so there is no substantial performance penalty when using a container compared to using natively installed software on the host system.
With Singularity, you can:
- Install anything you want (based on any Linux operating system)
- Ease installation issues by using pre-built container images
- Ensure the same software stack is used among a research group
- Use the same software stack across Linux systems (e.g., any HPC center)
Using Singularity on CARC systems
Singularity is installed on CARC systems as an RPM package outside of the software module system, so there is no need to load a module in order to use it. Instead, you can directly use the `singularity` commands. Please note that commands that require `sudo` will not be available to you on CARC systems.
To see the current version installed, enter:

singularity --version
To see the help pages, enter:

singularity help
Getting Singularity container images
A container image is a single executable file that defines the software environment for a container and also runs the container. A single container image can be used to run multiple instances of the same container simultaneously for different jobs.
To get a container image for use on CARC systems, you can either pull (i.e., download) pre-built container images into one of your directories or externally build a custom container image from a definition file and then transfer it to one of your directories.
When pulling containers, Singularity will create a `~/.singularity/cache` directory by default to store cache files, which can quickly use a lot of storage space. We recommend changing this location to a directory with more storage space (e.g., one of your scratch directories) by setting the environment variable `SINGULARITY_CACHEDIR`. For example:

export SINGULARITY_CACHEDIR=/scratch1/<username>/singularity

Add this line to your `~/.bashrc` to automatically set this variable every time you log in. Use the command `singularity cache clean` to clear the cache.
Pulling pre-built images
You can find pre-built container images at container registries, such as the Singularity Cloud Library or Docker Hub. You may also find them as file downloads from software repositories or websites; you can use a `wget <url>` command to download these.
If you have identified a pre-built container image from a registry that includes all the software you need, then you can download this image into one of your directories using the `singularity pull` command.
For the Singularity Cloud Library, use the `library://` registry prefix. For example, to get a basic Ubuntu image, enter:
singularity pull library://ubuntu
This will pull an image with the latest Ubuntu version into the current directory and name it `ubuntu_latest.sif`. You can give the image a custom name by specifying it before the registry prefix. For example:
singularity pull ubuntu.sif library://ubuntu
This will name the image `ubuntu.sif` instead.
For Docker Hub, use the `docker://` registry prefix. Most Docker container images will convert to the Singularity image format. For example, to get an image containing base R from the Rocker Project, enter:
singularity pull docker://rocker/r-base
This will pull an image that contains the latest version of base R into the current directory and name it `r-base_latest.sif`.
To get a specific version of an image, instead of the latest, you can append a tag ID to the image name. For example, to get an image containing base R version 3.6.3, enter:
singularity pull docker://rocker/r-base:3.6.3
Explore the registry page for images to obtain available tags.
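The default filename follows the pattern `<name>_<tag>.sif`. As an informal sketch of that naming convention (the URI and the shell derivation below are illustrative, not part of Singularity itself):

```shell
# Derive the default .sif filename Singularity would use for a pull,
# following the name_tag.sif convention described above.
uri="docker://rocker/r-base:3.6.3"   # example registry URI
ref="${uri#docker://}"               # strip the registry prefix -> rocker/r-base:3.6.3
name="${ref##*/}"                    # keep the last component   -> r-base:3.6.3
echo "${name%%:*}_${name##*:}.sif"   # prints: r-base_3.6.3.sif
```

This matches the `r-base_3.6.3.sif` file produced by the pull command above.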
Note that Singularity uses temporary space when pulling and converting Docker containers. If you have set the `TMPDIR` environment variable to one of your directories and you get errors when pulling Docker containers, try entering `unset TMPDIR` to unset the variable and then try pulling again.
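A minimal illustration of the workaround (the custom path here is hypothetical):

```shell
export TMPDIR=/scratch1/$USER/tmp  # a custom TMPDIR like this can break Docker pulls
unset TMPDIR                       # fall back to the system default temporary space
echo "${TMPDIR:-unset}"            # prints: unset
```

After unsetting the variable, retry the `singularity pull` command.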
Once an image is downloaded, it is ready to be used on CARC systems.
Building images from a definition file
You can build a custom container image by using a definition file that instructs the build process. Because building images requires superuser privileges, we do not allow users to build directly on CARC systems; you must build externally and then transfer the image to CARC systems.
The easiest way to build containers externally for use on CARC systems is to use the cloud-based Singularity Remote Builder, which is integrated with the Singularity Cloud Library. To use this service, you can sign in with your GitHub, GitLab, Google, or Microsoft account. Currently, you are limited to 11 GB of free storage for your container builds. Once signed in, you can submit a definition file to the remote builder and it will build the image and provide a build log. Assuming the build is successful, you can then pull the container image directly to one of your directories on CARC systems with the provided pull command. Instead of using the web interface, you can also create definition files and build remotely from CARC systems by using the `singularity build --remote` command.
Alternatively, you can install a virtual machine application on your local computer, such as Multipass or VirtualBox, create a Linux-based virtual machine, and install Singularity within it in order to build a container image from a definition file. Once built, you can transfer the container image to one of your directories on CARC systems.
To see the definition file that was used to build an image, enter:
singularity inspect --deffile <container_image>
Using the --remote build option
You can also use the `singularity build --remote` command on a login node to both submit a definition file to the remote builder and automatically download the built image. For example:
singularity build --remote r.sif r.def
This will submit the definition file (e.g., `r.def`) to the remote builder, build the image remotely and provide a build log in the terminal, and then download the image to your current working directory if the build is successful (e.g., as `r.sif`).
Once the image is downloaded, it is ready to be used on CARC systems.
An example definition file and build process
The following example uses the Singularity Remote Builder to build a container image.
1. Connect your remote builder account to CARC systems
First, create and enter an access token in order to access your remote builder account from CARC systems. Navigate to the Access Tokens page (in the drop-down menu under your username) and select "Create a New Access Token". Copy the created token and then, on a login node, run the `singularity remote login` command to enter the token. Note that the access token expires every 30 days, so this step will have to be repeated as needed to continue using the remote builder with CARC systems.
2. Create a definition file
Next, create a definition file. The following is an example definition file that uses Clear Linux as the base operating system with base R installed and then installs the R-stan software bundle:
Bootstrap: docker
From: clearlinux/r-base:latest

%post
    swupd update --quiet
    swupd bundle-add --quiet R-stan
    swupd clean --all

%test
    R --version

%environment
    export LC_ALL=C

%runscript
    R

%help
    Clear Linux with latest version of R linked to OpenBLAS and rstan package installed.
Save your definition file as a text file (e.g., `rstan.def`). For more details on creating definition files, see the official Singularity documentation. We provide some template definition files for various applications in a GitHub repo.
3. Submit the definition file
Next, sign in to the Singularity Remote Builder, navigate to the Remote Builder page, upload or copy and paste the contents of your definition file, and then select "Build". It will then begin to build and display a build log. The build will either be a "Success" or "Failure", indicated at the top of the page. Sometimes successful builds may still contain errors, so it is a good idea to review the build log each time. If the build fails, consult the build log and modify your definition file as needed to correct any errors.
4. Download the container image
Finally, download the image to one of your directories. Assuming the build is successful, the image will be saved to your personal container library. From the Remote Builder page, navigate to the image's library page by selecting the URL under the "Library:" heading at the top of the page; it will be of the form `<username>/default/image`. Download the container image to your current working directory on CARC systems by using the provided `singularity pull` command at the top of the image page. It will look similar to the following:

singularity pull library://<username>/default/image:latest
Running Singularity container images
You can run Singularity containers using one of the following commands:

- `singularity shell` — for an interactive shell within a container
- `singularity exec` — for executing commands within a container
- `singularity run` — for running a pre-defined runscript within a container
To start an interactive shell within a container, enter:
singularity shell <container_image>
Singularity>
Notice that your shell prompt changes to `Singularity>` to indicate that you are now working within the container. Enter `exit` to exit the container.
To non-interactively execute commands within a container, enter:
singularity exec <container_image> <commands to execute>
This will run the given commands using the software stack within the container. For example, to run an R script using the version of R installed within a container, enter:
singularity exec r.sif Rscript script.R
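As a concrete (hypothetical) example of the setup, you could create a minimal `script.R` in your current directory; the contents below are just an illustration:

```shell
# Write a small R script to run inside the container
cat > script.R <<'EOF'
x <- rnorm(100)
cat("mean:", mean(x), "\n")
EOF
# Then run it with the container's R installation (requires the r.sif image):
# singularity exec r.sif Rscript script.R
```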
To run a pre-defined runscript within a container, enter:
singularity run <container_image>
This will execute the defined sequence of commands in the runscript. There may be additional arguments that you can pass to the runscript.
To see the runscript for a container, enter:
singularity inspect --runscript <container_image>
There are a number of options that can be used with each of these commands. Enter `singularity help exec`, for example, to see the options available for a given command.
One useful option is `--cleanenv` (or the shorter `-e`), which establishes a clean shell environment within a container. For example:

singularity exec --cleanenv r.sif Rscript script.R

By default, most environment variables in your shell will be passed into the container, which can sometimes cause problems with using software installed within the container. However, note that using the `--cleanenv` option will result in Slurm environment variables not being included within the container. For more information on environment variables, see the official documentation.
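If you need `--cleanenv` but still want specific variables available inside the container, Singularity supports passing them in with a `SINGULARITYENV_` prefix; the prefix is stripped inside the container. The variable forwarded below is just an example:

```shell
# SINGULARITYENV_FOO on the host becomes FOO inside the container,
# even when --cleanenv is used. Here we forward a thread count,
# defaulting to 1 when not running inside a Slurm job:
export SINGULARITYENV_OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
echo "$SINGULARITYENV_OMP_NUM_THREADS"
```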
There is a separate root file system within containers to store installed software and files, but containers can still access and interact with file systems on the host system. Singularity will automatically mount a few default directories to a container, such as your home directory and /tmp, so you will have access to these directories as needed. You may want to exclude your home directory from being bound to the container because of software packages installed there (e.g., for Python or R containers). If so, add the `--no-home` option to the `singularity` command you are using.
To mount additional directories, such as your current working directory or your project or scratch directories, use the `--bind` option. For example:
singularity exec --bind $PWD,/scratch1/<username> r.sif Rscript script.R
Instead of the `--bind` option, you can also set the environment variable `SINGULARITY_BIND` to automatically include any directory paths that you want mounted in your containers. For example:

export SINGULARITY_BIND=$PWD,/scratch1/<username>

Add this line to your `~/.bashrc` to automatically set this variable every time you log in. For more information on bind mounting, see the official documentation.
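Because `SINGULARITY_BIND` is a comma-separated list, you can also append paths to it later without losing earlier entries (the paths below are examples):

```shell
export SINGULARITY_BIND="/project/$USER,/scratch1/$USER"   # initial bind paths
export SINGULARITY_BIND="$SINGULARITY_BIND,/tmp/inputs"    # append another path
echo "$SINGULARITY_BIND"
```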
Running Slurm jobs with Singularity
To use Singularity in interactive Slurm jobs, first request resources with Slurm's `salloc` command. Once the resources are granted, you will be logged in to a compute node, where you can run `singularity` commands interactively.
To use Singularity in batch Slurm jobs, simply include the `singularity` commands to be run in your job scripts. For example:
#!/bin/bash

#SBATCH --account=<project_id>
#SBATCH --partition=main
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=16GB
#SBATCH --time=1:00:00

module purge

singularity exec --bind $PWD r.sif Rscript script.R
The container will automatically have access to the requested resources.
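Assuming the script above is saved as `r-job.slurm` (the filename is just an example), submit it from a login node with the standard Slurm commands:

```shell
sbatch r-job.slurm   # submit the batch job
squeue -u $USER      # check its status in the queue
```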
Running GPU jobs with Singularity
To run GPU jobs with Singularity, you can use GPU-enabled container images with the CUDA toolkit installed. For example, you can convert official NVIDIA Docker container images to Singularity format. Make sure that the version of the image you pull (e.g., the version of the CUDA libraries within the container) is compatible with the NVIDIA drivers currently installed on the GPU nodes on CARC systems (use the `nvidia-smi` command on a GPU node to see the current driver version). For example, if the driver supports up to CUDA version 11.4, make sure the container image does not have a CUDA version greater than 11.4.
To run a GPU-enabled container on a GPU node, use the `--nv` option to allow the container to access the NVIDIA driver on the node:
singularity exec --cleanenv --nv tf.sif python script.py
For more information on using GPUs with Singularity, see the official documentation page for GPU support.
Running MPI jobs with Singularity
It is possible to run MPI jobs with Singularity, and there are different approaches to do so. For more information, see the official documentation page for MPI support.
If you have questions about or need help with Singularity, please submit a help ticket and we will assist you.
- CARC Singularity workshop materials
- CARC Singularity template definition files
- Singularity Remote Builder
- Singularity Cloud Library
- Intel oneContainer Portal
- NVIDIA GPU Cloud Catalog