
Warning

This page has not been updated yet. The page does not reflect the transition from PBS to Slurm.

Parallel Runs Setting on Karolina

An important aspect of every parallel application is the correct placement of MPI processes and threads onto the available hardware resources. Since incorrect settings can cause significant performance degradation, all users should be familiar with the basic principles explained below.

First, a basic hardware overview is provided, since it influences the settings of the mpirun command. Placement is then explained for the major MPI implementations, Intel MPI and Open MPI. The last section describes appropriate placement for memory-bound and compute-bound applications.

Hardware Overview

Karolina contains several types of nodes. This documentation describes the basic hardware structure of the universal and accelerated nodes. More technical details can be found in this presentation.

Universal Nodes

  • 720 nodes, each with 2 x AMD 7H12 (64 cores, 2.6 GHz)

Each AMD 7H12 socket is divided into 4 NUMA domains: NUMA 0-3 belong to socket 0 and NUMA 4-7 to socket 1. Every NUMA domain contains:

  • 2 x DDR4-3200 memory channels
  • 4 x 16 MB L3 cache
  • 16 cores (4 cores per L3 cache)
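
The layout above can be checked directly on an allocated node with standard Linux tools (a quick sketch; numactl and lscpu are commonly available, but their presence on the node is an assumption):

numactl --hardware
lscpu | grep -E 'NUMA|Socket'

The first command lists the NUMA domains with their cores and memory, the second summarizes the sockets and the NUMA-to-CPU mapping.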

Accelerated Nodes

  • 72 nodes, each with 2 x AMD 7763 (64 cores, 2.45 GHz)
  • 8 x NVIDIA A100 GPUs per node

Each AMD 7763 socket is divided into 4 NUMA domains: NUMA 0-3 belong to socket 0 and NUMA 4-7 to socket 1. Every NUMA domain contains:

  • 2 x DDR4-3200 memory channels
  • 2 x 32 MB L3 cache
  • 16 cores (8 cores per L3 cache)

The GPUs are attached to the odd-numbered NUMA domains: 2 x A100 to each of NUMA 1 and 3 on socket 0, and 2 x A100 to each of NUMA 5 and 7 on socket 1.
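
On the accelerated nodes, the GPU-to-NUMA affinity can be inspected in a similar way (a sketch; nvidia-smi is part of the NVIDIA driver stack and should be present on GPU nodes):

numactl --hardware
nvidia-smi topo -m

The topology matrix printed by nvidia-smi shows, for each GPU, its CPU and NUMA node affinity, which is useful when placing one MPI process per GPU.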

Assigning Processes / Threads to Particular Hardware

When an application is started, the operating system maps MPI processes and threads to particular cores. This mapping is not fixed, as the system is allowed to move your application to other cores. An inappropriate mapping or frequent migration can lead to significant performance degradation. Hence, a user should:

  • set mapping according to their application needs;
  • pin the application to particular hardware resources.

The settings can be described by environment variables that are briefly described on HPC wiki. However, mapping and pinning are highly non-portable: they depend on the particular system and on the MPI library used. The following sections describe the settings for the Karolina cluster.

The number of MPI processes per node should be set by PBS via the qsub command. Mapping and pinning are set differently for Intel MPI and Open MPI.
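
Whichever library you use, it is a good idea to verify the resulting placement. A minimal sketch: Open MPI can report the bindings it applies via --report-bindings, and Intel MPI prints its pinning map when the debug level is raised.

  • Open MPI: mpirun -n 32 --report-bindings ./app
  • Intel MPI: I_MPI_DEBUG=5 mpirun -n 32 ./app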

Open MPI

In the case of Open MPI, mapping can be set by the parameter --map-by. Pinning can be set by the parameter --bind-to. The list of all available options can be found here.

The most relevant options are:

  • bind-to: core, l3cache, numa, socket
  • map-by: core, l3cache, numa, socket, slot

For example, mapping and pinning to the L3 cache can be set with the mpirun command in the following way:

mpirun -n 32 --map-by l3cache --bind-to l3cache ./app

Both parameters can also be set by environment variables:

export OMPI_MCA_rmaps_base_mapping_policy=l3cache
export OMPI_MCA_hwloc_base_binding_policy=l3cache
mpirun -n 32 ./app

Intel MPI

In the case of Intel MPI, mapping and pinning can be set by environment variables that are described in Intel's Developer Reference. The most important variable is I_MPI_PIN_DOMAIN. It denotes the number of cores allocated to each MPI process and specifies both mapping and pinning.

The default setting is I_MPI_PIN_DOMAIN=auto:compact. It computes the number of cores allocated to each MPI process from the number of available cores and the requested number of MPI processes (total cores / requested MPI processes). This is usually the optimal setting, and the majority of applications can be run with the simple mpirun -n N ./app command, where N denotes the number of MPI processes.
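
If the default is not suitable, the domain can be set explicitly. A brief sketch using a named domain and an explicit domain size (the latter also appears in the placement overview below):

export I_MPI_PIN_DOMAIN=numa
mpirun -n 16 ./app

export I_MPI_PIN_DOMAIN=16
mpirun -n 8 ./app

The first variant creates one domain per NUMA domain; the second creates domains of 16 cores, so each MPI process is pinned to 16 cores.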

Examples of Placement to Different Hardware

Let us have a job allocated by the following qsub command:

qsub -l select=2:ncpus=128:mpiprocs=4:ompthreads=4

Then the following overview shows the placement of an application started with 8 MPI processes on the universal nodes for various mapping and pinning options (ranks 0-3 run on node 0, ranks 4-7 on node 1):

  • Open MPI: --map-by socket --bind-to socket / Intel MPI: I_MPI_PIN_DOMAIN=socket
    node 0: rank 0 -> socket 0 (NUMA 0-3, cores 0-63), rank 1 -> socket 1 (NUMA 4-7, cores 64-127), rank 2 -> socket 0 (NUMA 0-3, cores 0-63), rank 3 -> socket 1 (NUMA 4-7, cores 64-127)
    node 1: ranks 4-7 follow the same pattern

  • Open MPI: --map-by numa --bind-to numa / Intel MPI: I_MPI_PIN_DOMAIN=numa
    node 0: rank 0 -> NUMA 0 (cores 0-15), rank 1 -> NUMA 1 (cores 16-31), rank 2 -> NUMA 2 (cores 32-47), rank 3 -> NUMA 3 (cores 48-63); all ranks stay on socket 0
    node 1: ranks 4-7 follow the same pattern

  • Open MPI: --map-by l3cache --bind-to l3cache / Intel MPI: I_MPI_PIN_DOMAIN=cache3
    node 0: rank 0 -> cores 0-3, rank 1 -> cores 4-7, rank 2 -> cores 8-11, rank 3 -> cores 12-15; all ranks stay on socket 0, NUMA 0
    node 1: ranks 4-7 follow the same pattern

  • Open MPI: --map-by slot:pe=32 --bind-to core / Intel MPI: I_MPI_PIN_DOMAIN=32
    node 0: rank 0 -> cores 0-31 (NUMA 0-1), rank 1 -> cores 32-63 (NUMA 2-3), rank 2 -> cores 64-95 (NUMA 4-5), rank 3 -> cores 96-127 (NUMA 6-7)
    node 1: ranks 4-7 follow the same pattern

We can see from the placement above that mapping starts on the first node. When the first node is fully occupied (according to the number of MPI processes per node specified by qsub), mapping continues on the second node, and so on.

We note that in the case of --map-by numa and --map-by l3cache, the application is not spread across the whole node. To utilize a whole node, more MPI processes per node have to be used. In addition, I_MPI_PIN_DOMAIN=cache3 maps the processes incorrectly.

The last mapping (--map-by slot:pe=32 or I_MPI_PIN_DOMAIN=32) is the most general one. In this way, a user can directly specify the number of cores for each MPI process, independently of the hardware layout.
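
For the allocation above (4 MPI processes per node, 8 in total), the last row of the overview corresponds to commands along these lines (a sketch):

  • Open MPI: mpirun -n 8 --map-by slot:pe=32 --bind-to core ./app
  • Intel MPI: I_MPI_PIN_DOMAIN=32 mpirun -n 8 ./app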

Memory Bound Applications

The performance of memory-bound applications depends on the throughput to memory. Hence, it is optimal to use a number of cores equal to the number of memory channels, i.e., 16 cores per node (see the hardware description at the top of this document). Running a memory-bound application on more than 16 cores per node can lower its performance.

To fully utilize the memory bandwidth, two MPI processes must be assigned to each NUMA domain. This can be achieved by the following commands (for a single node):

  • Intel MPI: mpirun -n 16 ./app
  • Open MPI: mpirun -n 16 --map-by slot:pe=8 ./app

Intel MPI automatically places an MPI process on every 8th core. In the case of Open MPI, the --map-by parameter must be used. The required mapping can be achieved, for example, by --map-by slot:pe=8, which places an MPI process on every 8th core (in the same way as Intel MPI). This mapping also ensures that each MPI process is assigned to a different L3 cache.
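
Put together, a memory-bound run on a single universal node might look like the following PBS job script. This is only a sketch: the queue name, project ID, and module name are placeholders that must be replaced by values valid for your project.

#!/bin/bash
#PBS -q qcpu                            # queue name - placeholder
#PBS -A PROJECT-ID                      # project ID - placeholder
#PBS -l select=1:ncpus=128:mpiprocs=16
#PBS -l walltime=01:00:00

cd $PBS_O_WORKDIR
module load OpenMPI                     # module name - placeholder
mpirun -n 16 --map-by slot:pe=8 --bind-to core ./app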

Compute Bound Applications

For compute-bound applications, it is optimal to use as many cores as possible, i.e., 128 cores per node. The following commands can be used:

  • Intel MPI: mpirun -n 128 ./app
  • Open MPI: mpirun -n 128 --map-by core --bind-to core ./app

Pinning ensures that the operating system does not migrate MPI processes between cores.
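
If your compute-bound application is hybrid MPI+OpenMP (as in the qsub example above with ompthreads), the same mechanism can reserve cores for the threads of each rank. A sketch assuming 16 ranks with 8 OpenMP threads each on one universal node:

export OMP_NUM_THREADS=8
export OMP_PROC_BIND=close

  • Intel MPI: mpirun -n 16 ./app
  • Open MPI: mpirun -n 16 --map-by slot:pe=8 --bind-to core ./app

With Open MPI, pe=8 reserves 8 cores per rank and the threads stay within that set; with Intel MPI, the default auto:compact domain already covers 8 cores per rank in this configuration.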

Finding Optimal Setting for Your Application

Sometimes it is not clear what the best setting for your application is. In that case, you should test your application with different numbers of MPI processes. A good practice is to test your application with 16 to 128 MPI processes per node and measure the time required to finish the computation.

With Intel MPI, it is enough to start your application with the required number of MPI processes. For Open MPI, you can specify the mapping in the following way:

mpirun -n  16 --map-by slot:pe=8 --bind-to core ./app
mpirun -n  32 --map-by slot:pe=4 --bind-to core ./app
mpirun -n  64 --map-by slot:pe=2 --bind-to core ./app
mpirun -n 128 --map-by core      --bind-to core ./app
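
These runs can be scripted in one pass; a minimal sketch that times each configuration (Open MPI syntax, one universal node with 128 cores):

for n in 16 32 64 128; do
    pe=$((128 / n))                      # cores per MPI process
    echo "=== $n MPI processes, $pe cores each ==="
    time mpirun -n $n --map-by slot:pe=$pe --bind-to core ./app
done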