
Singularity on IT4Innovations

Singularity images of the main Linux distributions are prepared on our clusters. List of available Singularity images (05.04.2018):

   Salomon                 Anselm
      ├── CentOS             ├── CentOS
      │   ├── 6.9            │   ├── 6.9
      │   ├── 6.9-MIC        │   ├── 6.9-GPU
      │   ├── 7.4            │   ├── 7.4
      │   └── 7.4-MIC        │   └── 7.4-GPU
      ├── Debian             ├── Debian
      │   └── 8.0            │   ├── 8.0
      └── Ubuntu             │   └── 8.0-GPU
          └── 16.04          └── Ubuntu
                                 ├── 16.04
                                 └── 16.04-GPU

Current information about the available Singularity images can be obtained with the ml av command. The images are listed in the OS section.
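For illustration only, the relevant part of the ml av output may look similar to the following (shortened; the exact list depends on the cluster and changes over time):

$ ml av
...
------------------------------------ OS ------------------------------------
   CentOS/6.9    CentOS/7.4    Debian/8.0    Ubuntu/16.04
...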

The bootstrap scripts, wrappers, features, etc. are located here.

Note

The images with graphics card support are marked as -GPU and the images with Intel Xeon Phi support are marked as -MIC.

IT4Innovations Singularity Wrappers

For a better user experience with Singularity containers, we have prepared several wrappers:

  • image-exec
  • image-mpi
  • image-run
  • image-shell
  • image-update

These wrappers help you use the prepared Singularity images loaded as modules. You can load a Singularity image like any other module on the cluster with the ml OS/version command. When the module is loaded for the first time, the prepared image is copied into your home folder and is ready for use. The next time you load the module, the version of the image is checked and an update (if one exists) is offered. You can then update your copy of the image with the image-update command.

Warning

With an image update, all user changes to the image will be overwritten.

The runscript inside the Singularity image can be run by the image-run command. This command automatically mounts the /scratch and /apps storage and invokes the image as writable, so user changes can be made.

Very similar to image-run is the image-exec command. The only difference is that image-exec runs a user-defined command instead of the runscript. In this case, the command to be run is specified as a parameter.

For development, it is very useful to use an interactive shell inside the Singularity container. In this interactive shell you can make any changes to the image you want, but be aware that you cannot use sudo or other privileged commands directly on the cluster. To invoke the interactive shell easily, just use the image-shell command.

Another useful feature of Singularity is direct support for OpenMPI. For MPI to work properly, the same version of OpenMPI must be installed inside the image as is used on the cluster; OpenMPI/2.1.1 is installed in the prepared images. MPI must be started outside the container. The easiest way to start MPI is to use the image-mpi command. This command takes the same parameters as mpirun, so there is no difference between running a normal MPI application and an MPI application in a Singularity container.

Examples

In the examples, we will use the prepared Singularity images.

Load Image

$ ml CentOS/6.9
Your image of CentOS/6.9 is at location: /home/login/.singularity/images/CentOS-6.9_20180220133305.img

Tip

After the module is loaded for the first time, the prepared image is copied into your home folder to the .singularity/images subfolder.
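You can check the local copy directly; the listing below depends on which image modules you have already loaded, for example:

$ ls ~/.singularity/images
CentOS-6.9_20180220133305.img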

Wrappers

image-exec

Executes the given command inside the Singularity image. In this case, the container is started, the command is executed, and the container is stopped.

$ ml CentOS/7.3
Your image of CentOS/7.3 is at location: /home/login/.singularity/images/CentOS-7.3_20180220104046.img
$ image-exec cat /etc/centos-release
CentOS Linux release 7.3.1708 (Core)

image-mpi

MPI wrapper - see more in the MPI chapter of the Examples below.

image-run

This command runs the runscript inside the Singularity image. Note that the prepared images do not contain a runscript.
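For illustration only: with your own image that does define a runscript (made available to the wrappers as described in the note under How to Use Your Own Image on the Cluster below), image-run would execute it. The output here is hypothetical:

$ image-run
Hello from the runscript of my container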

image-shell

Invokes an interactive shell inside the Singularity image.

$ ml CentOS/7.3
$ image-shell
Singularity: Invoking an interactive shell within container...

Singularity CentOS-7.3_20180220104046.img:~>

Update Image

This command updates your local copy of the Singularity image. The local copy is overwritten in this case.

$ ml CentOS/6.9
New version of CentOS image was found. (New: CentOS-6.9_20180220092823.img Old: CentOS-6.9_20170220092823.img)
For updating image use: image-update
Your image of CentOS/6.9 is at location: /home/login/.singularity/images/CentOS-6.9_20170220092823.img
$ image-update
New version of CentOS image was found. (New: CentOS-6.9_20180220092823.img Old: CentOS-6.9_20170220092823.img)
Do you want to update local copy? (WARNING all user modification will be deleted) [y/N]: y
Updating image  CentOS-6.9_20180220092823.img
       2.71G 100%  199.49MB/s    0:00:12 (xfer#1, to-check=0/1)

sent 2.71G bytes  received 31 bytes  163.98M bytes/sec
total size is 2.71G  speedup is 1.00
New version is ready. (/home/login/.singularity/images/CentOS-6.9_20180220092823.img)

Intel Xeon Phi Cards - MIC

In the following example, we are using a job submitted by the command: qsub -A PROJECT -q qprod -l select=1:mpiprocs=24:accelerator=true -I

Info

The MIC image was prepared only for the Salomon cluster.

Code for the Offload Test

#include <stdio.h>
#include <thread>
#include <stdlib.h>
#include <unistd.h>

int main() {

  char hostname[1024];
  gethostname(hostname, 1024);

  // number of hardware threads available on the host
  unsigned int nthreads = std::thread::hardware_concurrency();
  printf("Hello world, #of cores: %u\n", nthreads);

  // this block is offloaded to the Intel Xeon Phi (MIC) coprocessor
  #pragma offload target(mic)
  {
    nthreads = std::thread::hardware_concurrency();
    printf("Hello world from MIC, #of cores: %u\n", nthreads);
  }

  return 0;
}

Compile and Run

[login@r38u03n975 ~]$ ml CentOS/6.9-MIC
Your image of CentOS/6.9-MIC is at location: /home/login/.singularity/images/CentOS-6.9-MIC_20180220112004.img
[login@r38u03n975 ~]$ image-shell
Singularity: Invoking an interactive shell within container...

Singularity CentOS-6.9-MIC_20180220112004.img:~> ml intel/2017a
Singularity CentOS-6.9-MIC_20180220112004.img:~> ml

Currently Loaded Modules:
  1) GCCcore/6.3.0                 3) icc/2017.1.132-GCC-6.3.0-2.27     5) iccifort/2017.1.132-GCC-6.3.0-2.27                   7) iimpi/2017a                   9) intel/2017a
  2) binutils/2.27-GCCcore-6.3.0   4) ifort/2017.1.132-GCC-6.3.0-2.27   6) impi/2017.1.132-iccifort-2017.1.132-GCC-6.3.0-2.27   8) imkl/2017.1.132-iimpi-2017a
Singularity CentOS-6.9-MIC_20180220112004.img:~> icpc -std=gnu++11 -qoffload=optional  hello.c -o hello-host
Singularity CentOS-6.9-MIC_20180220112004.img:~> ./hello-host
Hello world, #of cores: 24
Hello world from MIC, #of cores: 244

GPU Image

In the following example, we are using a job submitted by the command: qsub -A PROJECT -q qnvidia -l select=1:ncpus=16:mpiprocs=16 -l walltime=01:00:00 -I

Note

The GPU image was prepared only for the Anselm cluster.

Checking NVIDIA Driver Inside Image

[login@cn199.anselm ~]$ image-shell
Singularity: Invoking an interactive shell within container...

Singularity CentOS-6.9-GPU_20180309130604.img:~> ml
No modules loaded
Singularity CentOS-6.9-GPU_20180309130604.img:~> nvidia-smi
Mon Mar 12 07:07:53 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.30                 Driver Version: 390.30                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K20m          Off  | 00000000:02:00.0 Off |                    0 |
| N/A   28C    P0    51W / 225W |      0MiB /  4743MiB |     89%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

MPI

In the following example, we are using a job submitted by the command: qsub -A PROJECT -q qprod -l select=2:mpiprocs=24 -l walltime=00:30:00 -I

Note

We have seen no major performance impact for a job running in a Singularity container.

With Singularity, the MPI usage model is to call mpirun from outside the container, and reference the container from your mpirun command. Usage would look like this:

$ mpirun -np 24 singularity exec container.img /path/to/contained_mpi_prog

By calling mpirun outside of the container, we solve several very complicated work-flow aspects. For example, if mpirun is called from within the container, it must have a method for spawning processes on remote nodes. Historically, SSH has been used for this, which means that there must be an sshd running within the container on the remote nodes, and this sshd process must not conflict with the sshd running on that host. It is also possible for the resource manager to launch the job and (in OpenMPI's case) the orted (Open RTE User-Level Daemon) processes on the remote system, but that then requires resource manager modification and container awareness.

In the end, we do not gain anything by calling mpirun from within the container except for increasing the complexity and possibly losing out on some performance benefits (e.g. if the container was not built with the same OFED stack as the host).

MPI Inside Singularity Image

$ ml CentOS/6.9
$ image-shell
Singularity: Invoking an interactive shell within container...

Singularity CentOS-6.9_20180220092823.img:~> mpirun hostname | wc -l
24

As you can see in this example, we allocated two nodes, but MPI can use only one node (24 processes) when used inside the Singularity image.

MPI Outside Singularity Image

$ ml CentOS/6.9
Your image of CentOS/6.9 is at location: /home/login/.singularity/images/CentOS-6.9_20180220092823.img
$ image-mpi hostname | wc -l
48

In this case, the MPI wrapper behaves like the mpirun command. mpirun is called outside the container and the communication between nodes is propagated into the container automatically.
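Since the wrapper accepts the same parameters as mpirun, you can, for example, set the number of processes explicitly. This is only a sketch; my_mpi_app is a hypothetical MPI binary built inside the image:

$ ml CentOS/6.9
$ image-mpi -np 48 ./my_mpi_app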

How to Use Your Own Image on the Cluster?

  • Prepare the image on your computer
  • Transfer the image to your /home directory on the cluster (for example, to .singularity/image)
local:$ scp container.img login@login4.salomon.it4i.cz:~/.singularity/image/container.img
  • Load module Singularity (ml Singularity)
  • Use your image

Note

If you want to use the Singularity wrappers with your own images, then load module Singularity-wrappers/master and set the environment variable IMAGE_PATH_LOCAL=/path/to/container.img.
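A minimal sketch of this workflow (the image path and the command passed to image-exec are only examples):

$ ml Singularity-wrappers/master
$ export IMAGE_PATH_LOCAL=$HOME/.singularity/image/container.img
$ image-exec cat /etc/os-release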

How to Edit an IT4Innovations Image?

  • Transfer the image to your computer
local:$ scp login@login4.salomon.it4i.cz:/home/login/.singularity/image/container.img container.img
  • Modify the image (see the sketch below)
  • Transfer the image from your computer to your /home directory on the cluster
local:$ scp container.img login@login4.salomon.it4i.cz:/home/login/.singularity/image/container.img
  • Load module Singularity (ml Singularity)
  • Use your image
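The modification itself has to be done on a machine where you have root privileges, for example by opening a writable shell in the image. This is only a sketch; the exact options depend on your local Singularity version:

local:$ sudo singularity shell --writable container.img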
