Scalasca is a software tool that supports the performance optimization of parallel programs by measuring and analyzing their runtime behavior. The analysis identifies potential performance bottlenecks – in particular those concerning communication and synchronization – and offers guidance in exploring their causes.

Scalasca supports profiling of MPI, OpenMP and hybrid MPI+OpenMP applications.

Installed Versions

For the current list of installed versions, use:

$ ml av Scalasca


Profiling a parallel application with Scalasca consists of three steps:

  1. Instrumentation: compiling the application in such a way that profiling data can be generated.
  2. Runtime measurement: running the instrumented application with the Scalasca profiler to collect performance data.
  3. Analysis of the collected reports.


Instrumentation via scalasca -instrument is discouraged. Use Score-P instrumentation instead.
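With Score-P, instrumentation is done by prefixing the usual compile and link commands with the scorep wrapper. A minimal sketch (compiler choice, flags, and program names are only examples):

```shell
# Instrument an MPI program by prepending the Score-P wrapper
# to the ordinary compile/link command (program name is illustrative)
scorep mpicc -O2 -o mympiprogram mympiprogram.c

# Hybrid MPI+OpenMP: keep the usual OpenMP flag; Score-P
# instruments both the MPI calls and the OpenMP regions
scorep mpicc -fopenmp -O2 -o myhybrid myhybrid.c
```

The resulting executable is then run through scalasca -analyze as described below.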

Runtime Measurement

After the application is instrumented, runtime measurement can be performed with the scalasca -analyze command. The syntax is:

scalasca -analyze [scalasca options] [launcher] [launcher options] [program] [program options]

An example:

$ scalasca -analyze mpirun -np 4 ./mympiprogram

Some notable Scalasca options are:

  • -t enables trace data collection. By default, only summary data are collected.
  • -e <directory> specifies a directory to which the collected data is saved. By default, Scalasca saves the data to a directory with the scorep_ prefix, followed by the name of the executable and the launch configuration.


Scalasca can generate a huge amount of data, especially if tracing is enabled. Consider saving the data to a scratch directory.
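Combining the options above, a tracing run that writes its experiment to a scratch directory might look like this (the scratch path is only an example; use your site's scratch filesystem):

```shell
# Collect trace data (-t) in addition to the summary, and store the
# experiment under scratch (-e); the path below is illustrative
scalasca -analyze -t -e /scratch/$USER/scalasca_trace \
    mpirun -np 4 ./mympiprogram
```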

Analysis of Reports

For the analysis, you must have the Score-P and CUBE modules loaded. The analysis is done in two steps: first, the data is preprocessed, and then the CUBE GUI tool is launched.

To launch the analysis, run:

$ scalasca -examine [options] <experiment_directory>

If you do not wish to launch the GUI tool, use the -s option:

$ scalasca -examine -s <experiment_directory>

Alternatively, you can open CUBE and load the data directly from there. Keep in mind that in this case, the preprocessing is not done and not all metrics will be shown in the viewer.
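Assuming a summary experiment produced by the example run above, opening the raw report directly in CUBE might look like this (the experiment directory name follows Scalasca's default naming and is illustrative):

```shell
# Open the raw summary report in the CUBE GUI without Scalasca
# post-processing; fewer derived metrics will be available
cube scorep_mympiprogram_4_sum/profile.cubex
```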

Refer to the CUBE documentation for usage of the GUI viewer.