Karolina Compilation

Since Karolina's nodes are equipped with AMD Zen 2 and Zen 3 processors, we recommend following these instructions to avoid degraded performance when compiling your code:

1. Select Compiler Flags

When compiling your code, it is important to select the right compiler flags; otherwise, the code will not be SIMD vectorized, resulting in severely degraded performance. Depending on the compiler, use the following flags:

Important

The -Ofast optimization may result in unpredictable behavior (e.g. a floating-point overflow).

Compiler | Module   | Command | Flags
AOCC     | ml AOCC  | clang   | -O3 -mavx2 -march=znver2
INTEL    | ml intel | icc     | -O3 -xCORE-AVX2
GCC      | ml GCC   | gcc     | -O3 -mavx2
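
For illustration, a minimal compile sketch using the GCC row of the table above (the file names my_code.c and my_code.x are placeholders; the optional -fopt-info-vec flag only asks GCC to report which loops were vectorized):

ml GCC
gcc -O3 -mavx2 -fopt-info-vec -o my_code.x my_code.c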

The compiler flags and the resulting compiler performance may be verified with our benchmark; see the Lorenz Compiler performance benchmark.

2. Use BLAS Library

It is important to use a BLAS library that performs well on AMD processors. To combine general CPU code optimizations with the most efficient BLAS routines, we recommend using the latest Intel Compiler suite together with Cray's Scientific Library bundle (LibSci). The Intel Compiler suite also includes an efficient MPI implementation, the Intel MPI library, which utilizes the InfiniBand interconnect.

For the compilation, as well as at the runtime of the compiled code, use:

ml PrgEnv-intel
ml cray-pmi/6.1.14

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CRAY_LD_LIBRARY_PATH:$CRAY_LIBSCI_PREFIX_DIR/lib:/opt/cray/pals/1.3.2/lib
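
Before compiling, you may want to double-check that the environment is set up as expected; a minimal sketch (the printed paths will differ depending on the installed versions):

ml                               # lists the currently loaded modules; PrgEnv-intel and cray-pmi should appear
echo $CRAY_LIBSCI_PREFIX_DIR     # should point to the LibSci installation
echo $LD_LIBRARY_PATH            # should now include the LibSci lib directory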

There are two standard situations for compiling and running the code:

OpenMP Without MPI

To compile the code against LibSci, without MPI but still enabling OpenMP to run over multiple cores, use:

icx -qopenmp -L$CRAY_LIBSCI_PREFIX_DIR/lib -I$CRAY_LIBSCI_PREFIX_DIR/include -o BINARY.x SOURCE_CODE.c  -lsci_intel_mp

To run the resulting binary use:

OMP_NUM_THREADS=128 OMP_PROC_BIND=true BINARY.x

This enables an effective run over all 128 cores available on a single Karolina compute node.
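
To verify that the binary was indeed linked against LibSci (binary name taken from the example above), you can, for instance, inspect its shared-library dependencies:

ldd BINARY.x | grep -i sci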

OpenMP With MPI

To compile the code against LibSci, with MPI, use:

mpiicx -qopenmp -L$CRAY_LIBSCI_PREFIX_DIR/lib -I$CRAY_LIBSCI_PREFIX_DIR/include -o BINARY.x SOURCE_CODE.c  -lsci_intel_mp -lsci_intel_mpi_mp

To run the resulting binary use:

OMP_NUM_THREADS=64 OMP_PROC_BIND=true mpirun -n 2 ${HOME}/BINARY.x

This example runs BINARY.x, placed in ${HOME}, as 2 MPI processes, each using 64 cores of a single socket of a single node.

Another example would be to run a job on 2 full nodes, utilizing 128 cores on each (256 cores in total) and letting LibSci efficiently place the BLAS routines across the allocated CPU sockets:

OMP_NUM_THREADS=128 OMP_PROC_BIND=true mpirun -n 2 ${HOME}/BINARY.x

This assumes you have allocated 2 full nodes on Karolina using SLURM directives, e.g. in a submission script:

#SBATCH --nodes 2
#SBATCH --ntasks-per-node 128

Before the run, don't forget to ensure that you have the correct modules loaded and that the LD_LIBRARY_PATH environment variable is set as shown above (e.g. as part of your SLURM submission script).
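
For illustration, a minimal submission script sketch putting the above pieces together (the job name, project ID, partition, and time limit are placeholders and must be adjusted to your allocation):

#!/bin/bash
#SBATCH --job-name blas_job
#SBATCH --account PROJECT-ID
#SBATCH --partition qcpu
#SBATCH --nodes 2
#SBATCH --ntasks-per-node 128
#SBATCH --time 01:00:00

ml PrgEnv-intel
ml cray-pmi/6.1.14
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CRAY_LD_LIBRARY_PATH:$CRAY_LIBSCI_PREFIX_DIR/lib:/opt/cray/pals/1.3.2/lib

OMP_NUM_THREADS=128 OMP_PROC_BIND=true mpirun -n 2 ${HOME}/BINARY.x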

Note

Most MPI libraries do the binding automatically. The binding of MPI ranks can be inspected for any MPI library by running $ mpirun -n num_of_ranks numactl --show. However, if the ranks spawn threads, the binding of these threads should be done via the environment variables described above.
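
For example, a quick sanity-check sketch combining both points of the note (rank and thread counts taken from the single-node, two-socket example above; numactl must be available on the compute nodes):

mpirun -n 2 numactl --show                                            # inspect the CPU binding applied to each rank
OMP_NUM_THREADS=64 OMP_PROC_BIND=true mpirun -n 2 ${HOME}/BINARY.x    # bind the OpenMP threads spawned by each rank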

The choice of BLAS library and its performance may be verified with our benchmark; see the Lorenz BLAS performance benchmark.