
OpenACC MPI Tutorial

This tutorial is an excerpt from NVIDIA's 5× in 5 Hours: Porting a 3D Elastic Wave Simulator to GPUs Using OpenACC tutorial. All source code for this tutorial can be downloaded as part of this tarball. SEISMIC_CPML, developed by Dimitri Komatitsch and Roland Martin from the University of Pau, France, is a set of ten open-source Fortran 90 programs.


Before building and running each step, make sure that the compiler (pgfortran) and MPI wrappers (mpif90) are in your path.

Step 0: Evaluation

Before you start, you should evaluate the code to determine whether it is worth accelerating. Using the compiler flag -Minfo=intensity, you can see that the average compute intensity of the various loops is between 2.5 and 2.64. As a rule, anything below 1.0 is generally not worth accelerating unless it is part of a larger program.
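
For example, you can request this feedback when compiling a single source file (the file name below is only a placeholder, not the actual SEISMIC_CPML source name):

pgfortran -fast -Minfo=intensity -c seismic_solver.f90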

To build and run the original MPI/OpenMP code on your system, do the following:

cd step0
make build
make run
make verify

Step 1: Adding Setup Code

Because this is an MPI code where each process will use its own GPU, you need to add some utility code to ensure that happens. The setDevice routine first determines which node the process is on (via a call to hostid) and then gathers the hostids from all other processes. It then determines how many GPUs are available on the node and assigns the devices to each process.

Note that in order to maintain portability with the CPU version, this section of code is guarded by the preprocessor macro _OPENACC, which is defined when the OpenACC directives are enabled in the HPC Fortran compiler through the use of the -acc command-line compiler option.

#ifdef _OPENACC
function setDevice(nprocs,myrank)

  use iso_c_binding
  use openacc
  implicit none
  include 'mpif.h'

  interface
    function gethostid() BIND(C)
      use iso_c_binding
      integer (C_INT) :: gethostid
    end function gethostid
  end interface

  integer :: nprocs, myrank
  integer, dimension(nprocs) :: hostids, localprocs
  integer :: hostid, ierr, numdev, mydev, i, numlocal
  integer :: setDevice

! get the hostids so we can determine which other processes are on this node
  hostid = gethostid()
  CALL mpi_allgather(hostid,1,MPI_INTEGER,hostids,1,MPI_INTEGER, &
                     MPI_COMM_WORLD,ierr)

! determine which processes are on this node
  numlocal = 0
  localprocs = 0
  do i = 1,nprocs
    if (hostid .eq. hostids(i)) then
      localprocs(i) = numlocal
      numlocal = numlocal + 1
    endif
  enddo

! get the number of devices on this node
  numdev = acc_get_num_devices(ACC_DEVICE_NVIDIA)

  if (numdev .lt. 1) then
    print *, 'ERROR: There are no devices available on this host. &
             &ABORTING.', myrank
    stop
  endif

! print a warning if the number of devices is less than the number
! of processes on this node.  Having multiple processes share devices is not
! recommended.
  if (numdev .lt. numlocal) then
    if (localprocs(myrank+1) .eq. 1) then
      ! print the message only once per node
      print *, 'WARNING: The number of processes is greater than the &
               &number of GPUs.', myrank
    endif
    mydev = mod(localprocs(myrank+1),numdev)
  else
    mydev = localprocs(myrank+1)
  endif

  call acc_set_device_num(mydev,ACC_DEVICE_NVIDIA)
  call acc_init(ACC_DEVICE_NVIDIA)
  setDevice = mydev

end function setDevice
#endif
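
A minimal sketch of how this routine might be called once MPI is initialized (the variable names below are illustrative, not taken verbatim from the tutorial source):

  call MPI_INIT(code)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nproc, code)
  call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, code)

#ifdef _OPENACC
! bind this MPI rank to a GPU before entering any OpenACC compute or data region
  mydevice = setDevice(nproc, myrank)
#endif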

To build and run the step1 code on your system, do the following:

cd step1
make build
make run
make verify

Step 2: Adding Compute Regions

Next, you add six compute regions around the eight parallel loops. For example, here's the final reduction loop.

!$acc kernels
  do k = kmin,kmax
    do j = NPOINTS_PML+1, NY-NPOINTS_PML
      do i = NPOINTS_PML+1, NX-NPOINTS_PML

! compute kinetic energy first, defined as 1/2 rho ||v||^2
! in principle we should use rho_half_x_half_y instead of rho for vy
! in order to interpolate density at the right location in the staggered grid
! cell but in a homogeneous medium we can safely ignore it

      total_energy_kinetic = total_energy_kinetic + 0.5d0 * rho*( &
              vx(i,j,k)**2 + vy(i,j,k)**2 + vz(i,j,k)**2)

! add potential energy, defined as 1/2 epsilon_ij sigma_ij
! in principle we should interpolate the medium parameters at the right location
! in the staggered grid cell but in a homogeneous medium we can safely ignore it

! compute total field from split components
      epsilon_xx = ((lambda + 2.d0*mu) * sigmaxx(i,j,k) - lambda *  &
      sigmayy(i,j,k) - lambda*sigmazz(i,j,k)) / (4.d0 * mu * (lambda + mu))
      epsilon_yy = ((lambda + 2.d0*mu) * sigmayy(i,j,k) - lambda *  &
          sigmaxx(i,j,k) - lambda*sigmazz(i,j,k)) / (4.d0 * mu * (lambda + mu))
      epsilon_zz = ((lambda + 2.d0*mu) * sigmazz(i,j,k) - lambda *  &
          sigmaxx(i,j,k) - lambda*sigmayy(i,j,k)) / (4.d0 * mu * (lambda + mu))
      epsilon_xy = sigmaxy(i,j,k) / (2.d0 * mu)
      epsilon_xz = sigmaxz(i,j,k) / (2.d0 * mu)
      epsilon_yz = sigmayz(i,j,k) / (2.d0 * mu)

      total_energy_potential = total_energy_potential + &
        0.5d0 * (epsilon_xx * sigmaxx(i,j,k) + epsilon_yy * sigmayy(i,j,k) + &
        epsilon_zz * sigmazz(i,j,k) + 2.d0 * epsilon_xy * sigmaxy(i,j,k) + &
        2.d0*epsilon_xz * sigmaxz(i,j,k) + 2.d0*epsilon_yz * sigmayz(i,j,k))

      enddo
    enddo
  enddo
!$acc end kernels

The -acc command line option to the HPC Accelerator Fortran compiler enables OpenACC directives. Note that OpenACC is meant to model a generic class of devices.

Another compiler option you'll want to use during development is -Minfo, which provides feedback on optimizations and transformations performed on your code. For accelerator-specific information, use the -Minfo=accel sub-option.

Examples of feedback messages produced when compiling SEISMIC_CPML include:

   1113, Generating copyin(vz(11:91,11:631,kmin:kmax))
         Generating copyin(vy(11:91,11:631,kmin:kmax))
         Generating copyin(vx(11:91,11:631,kmin:kmax))
         Generating copyin(sigmaxx(11:91,11:631,kmin:kmax))
         Generating copyin(sigmayy(11:91,11:631,kmin:kmax))
         Generating copyin(sigmazz(11:91,11:631,kmin:kmax))
         Generating copyin(sigmaxy(11:91,11:631,kmin:kmax))
         Generating copyin(sigmaxz(11:91,11:631,kmin:kmax))
         Generating copyin(sigmayz(11:91,11:631,kmin:kmax))

To compute on a GPU, the first step is to move data from host memory to GPU memory. In the example above, the compiler tells you that it is copying over nine arrays.

Note the copyin clauses. They mean that the compiler copies the data to the GPU but does not copy it back to the host. This is because line 1113 corresponds to the start of the reduction loop compute region, where these arrays are used but never modified.

Data movement clauses (a short sketch follows the list):

  • copyin - the data is copied only to the GPU;
  • copy - the data is copied to the device at the beginning of the region and copied back at the end of the region;
  • copyout - the data is only copied back to the host.
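
Here is a minimal sketch (not from SEISMIC_CPML) showing all three clauses on a single compute region: a is only read on the device, b is both read and written, and c is only written.

!$acc kernels copyin(a) copy(b) copyout(c)
  do i = 1,n
    b(i) = 2.0d0 * b(i)       ! b is read and written: copy
    c(i) = a(i) + b(i)        ! a is only read: copyin; c is only written: copyout
  enddo
!$acc end kernels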

The compiler is conservative and only copies the data that's actually required to perform the necessary computations. Unfortunately, because the interior sub-arrays are not contiguous in host memory, the compiler needs to generate multiple data transfers for each array.

   1114, Loop is parallelizable
   1115, Loop is parallelizable
   1116, Loop is parallelizable
         Accelerator kernel generated

Here the compiler has performed dependence analysis on the loops at lines 1114, 1115, and 1116 (the reduction loop shown earlier). It finds that all three loops are parallelizable so it generates an accelerator kernel.

The compiler may attempt to work around dependences that prevent parallelization by interchanging loops (i.e., changing their order) where it is safe to do so. At least one outer or interchanged loop must be parallel for an accelerator kernel to be generated.
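
To see what this analysis is checking, consider a minimal illustration (not from SEISMIC_CPML): the first loop below carries a dependence from one iteration to the next and cannot be parallelized as written, while the second has fully independent iterations and can become an accelerator kernel.

  do i = 2,n
    a(i) = a(i-1) + b(i)   ! loop-carried dependence on a(i-1)
  enddo

  do i = 1,n
    c(i) = a(i) + b(i)     ! iterations are independent
  enddo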

How the threads are organized is called the loop schedule. Below you can see the loop schedule for our reduction loop. The do loops have been replaced with a three-dimensional gang, which in turn is composed of a two-dimensional vector section.

       1114, !$acc loop gang ! blockidx%y
       1115, !$acc loop gang, vector(4) ! blockidx%z threadidx%y
       1116, !$acc loop gang, vector(32) ! blockidx%x threadidx%x

In CUDA terminology, the gang clause corresponds to a grid dimension and the vector clause corresponds to a thread block dimension.

So here the iteration space is being decomposed into blocks of 32×4 elements, with a single thread working on each element. Because the number of gangs is not specified in the loop schedule, it will be determined dynamically when the kernel is launched. If the gang clause had a fixed width, such as gang(16), then each generated kernel would loop over multiple elements.
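
As a hypothetical illustration (not from SEISMIC_CPML, and the array a is made up), a fixed-width gang clause might look like the following; with only 16 gangs requested, each generated block has to loop over several values of k:

!$acc kernels
!$acc loop gang(16)        ! fixed number of gangs; each gang iterates over several k values
  do k = kmin,kmax
!$acc loop vector(128)     ! one thread per element within each gang
    do i = 1,NX
      a(i,k) = 2.0d0 * a(i,k)
    enddo
  enddo
!$acc end kernels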

With CUDA, programming reductions and managing shared memory can be a fairly difficult task. As the feedback below shows, the compiler generates this reduction code for you automatically.

       1122, Sum reduction generated for total_energy_kinetic
       1140, Sum reduction generated for total_energy_potential
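
Inside a kernels region the compiler detects these reductions automatically, as the messages above show. If you wrote the same loop with the parallel construct instead, you would state the reduction explicitly. A minimal sketch (not the tutorial's actual code; rho, v and n are illustrative):

!$acc parallel loop reduction(+:total_energy_kinetic)
  do i = 1,n
    total_energy_kinetic = total_energy_kinetic + 0.5d0 * rho * v(i)**2
  enddo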

To build and run the step2 code on your system, do the following:

cd step2
make build
make run
make verify

Step 3: Adding Data Regions


Set the environment variable PGI_ACC_TIME=1 and run your executable. This option prints basic profile information such as the kernel execution time, data transfer time, initialization time, the actual launch configuration, and total time spent in a compute region. Note that the total time is measured from the host and includes time spent executing host code within a region.
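
For example, assuming a bash-like shell:

export PGI_ACC_TIME=1
make run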

To improve performance, you should minimize the amount of time spent transferring data between the host and the GPU; this is where the data directive comes in. You can use a data region to specify exact points in your program where data should be copied from host memory to GPU memory, and back again. Any compute region enclosed within a data region will use the previously copied data, without the need to copy at the boundaries of the compute region. A data region can span across host code and multiple compute regions, and even across subroutine boundaries.

In looking at the arrays in SEISMIC_CPML, there are 18 arrays with constant values. Another 21 are used only within compute regions and so are never needed on the host. Let's start by adding a data region around the outer time step loop. Three arrays, sigmazz, sigmayz, and sigmaxz, do need to be copied back to the host so their halos can be passed between MPI processes.

For those cases, we use the update directive.

!---  beginning of time loop
!$acc data &
!$acc copyin(a_x_half,b_x_half,k_x_half,                       &
!$acc        a_y_half,b_y_half,k_y_half,                       &
!$acc        a_z_half,b_z_half,k_z_half,                       &
!$acc        a_x,a_y,a_z,b_x,b_y,b_z,k_x,k_y,k_z,              &
!$acc        sigmaxx,sigmaxz,sigmaxy,sigmayy,sigmayz,sigmazz,  &
!$acc        memory_dvx_dx,memory_dvy_dx,memory_dvz_dx,        &
!$acc        memory_dvx_dy,memory_dvy_dy,memory_dvz_dy,        &
!$acc        memory_dvx_dz,memory_dvy_dz,memory_dvz_dz,        &
!$acc        memory_dsigmaxx_dx, memory_dsigmaxy_dy,           &
!$acc        memory_dsigmaxz_dz, memory_dsigmaxy_dx,           &
!$acc        memory_dsigmaxz_dx, memory_dsigmayz_dy,           &
!$acc        memory_dsigmayy_dy, memory_dsigmayz_dz,           &
!$acc        memory_dsigmazz_dz)

  do it = 1,NSTEP

! ... compute regions for the body of the time step ...

!$acc update host(sigmazz,sigmayz,sigmaxz)
! sigmazz(k+1), left shift
  call MPI_SENDRECV(sigmazz(:,:,1),number_of_values,MPI_DOUBLE_PRECISION, &
         receiver_left_shift,message_tag,sigmazz(:,:,NZ_LOCAL+1), &
         number_of_values,MPI_DOUBLE_PRECISION,sender_left_shift, &
         message_tag,MPI_COMM_WORLD,message_status,code)

! ... similar exchanges for sigmayz and sigmaxz ...

!$acc update device(sigmazz,sigmayz,sigmaxz)


  enddo   ! --- end of time loop

!$acc end data

Data regions can be nested; in fact, we used this feature in the time loop body for the arrays vx, vy and vz, as shown below. These arrays are copied back and forth at the inner data region boundary, and so are moved more often than the arrays in the outer data region, but they are still shared across multiple compute regions rather than being copied at each compute region boundary.

Note that we do not specify any array dimensions in the copy clause. This instructs the compiler to copy each array in its entirety as a contiguous block, and eliminates the inefficiency we noted earlier when interior sub-arrays were being copied in multiple blocks.

!$acc data copy(vx,vy,vz)

! ... the data region spans five compute regions and host code ...

!$acc kernels


!$acc end kernels

!$acc end data

To build and run the step3 code on your system, do the following:

cd step3
make build
make run
make verify

Step 4: Optimizing Data Transfers

The next step further optimizes the data transfers by migrating as much of the computation as possible to the GPU and moving only the absolute minimum amount of data required. The first change is to move the start of the outer data region up so that it occurs earlier in the code, and to put the data initialization loops, including those for the vx, vy, and vz arrays, into compute kernels. This approach lets you remove the inner data region used in the previous optimization step.

In the following example code, notice the use of the create clause. This instructs the compiler to allocate space for variables in GPU memory for local use but to perform no data movement on those variables. Essentially they are used as scratch variables in GPU memory.

!$acc data                                                     &
!$acc copyin(a_x_half,b_x_half,k_x_half,                       &
!$acc        a_y_half,b_y_half,k_y_half,                       &
!$acc        a_z_half,b_z_half,k_z_half,                       &
!$acc        ix_rec,iy_rec,                                    &
!$acc        a_x,a_y,a_z,b_x,b_y,b_z,k_x,k_y,k_z),             &
!$acc copyout(sisvx,sisvy),                                    &
!$acc create(memory_dvx_dx,memory_dvy_dx,memory_dvz_dx,        &
!$acc        memory_dvx_dy,memory_dvy_dy,memory_dvz_dy,        &
!$acc        memory_dvx_dz,memory_dvy_dz,memory_dvz_dz,        &
!$acc        memory_dsigmaxx_dx, memory_dsigmaxy_dy,           &
!$acc        memory_dsigmaxz_dz, memory_dsigmaxy_dx,           &
!$acc        memory_dsigmaxz_dx, memory_dsigmayz_dy,           &
!$acc        memory_dsigmayy_dy, memory_dsigmayz_dz,           &
!$acc        memory_dsigmazz_dz,                               &
!$acc        vx,vy,vz,vx1,vy1,vz1,vx2,vy2,vz2,                 &
!$acc        sigmazz1,sigmaxz1,sigmayz1,                       &
!$acc        sigmazz2,sigmaxz2,sigmayz2)                       &
!$acc copyin(sigmaxx,sigmaxz,sigmaxy,sigmayy,sigmayz,sigmazz)


! Initialize vx, vy and vz arrays on the device
!$acc kernels
  vx(:,:,:) = ZERO
  vy(:,:,:) = ZERO
  vz(:,:,:) = ZERO
!$acc end kernels


One caveat to using data regions is that you must be aware of which copy (host or device) of the data you are actually using in a given loop or computation. For example, any update to the copy of a variable in device memory won't be reflected in the host copy until you transfer it back, either with an update directive or through a copy clause at a data or compute region boundary.


Unintentional loss of coherence between the host and device copy of a variable is one of the most common causes of validation errors in OpenACC programs.
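
A minimal sketch of the pitfall (the array name is illustrative): inside a data region, assignments in a compute region touch only the device copy, while plain host code touches only the host copy, so an explicit update is needed to bring the two back in sync.

!$acc data copy(v)
!$acc kernels
  v(:) = 0.0d0            ! executes on the GPU: only the device copy is zeroed
!$acc end kernels
  v(1) = 1.0d0            ! executes on the host: only the host copy sees this
!$acc update device(v)    ! push the host change to the device copy
!$acc end data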

After making the above change to SEISMIC_CPML, the code generated incorrect results. After debugging, it was determined that the section of the time step loop that initializes boundary conditions was omitted from an OpenACC compute region. As a result, we were initializing the host copy of the data, rather than the device copy as intended, which resulted in uninitialized variables in device memory.

The next challenge in optimizing the data transfers related to the handling of the halo regions. SEISMIC_CPML passes halos from six 3-D arrays between MPI processes during the course of the computations.

After some experimentation, we settled on an approach whereby we added six new temporary 2-D arrays to hold the halo data. Within a compute region we gathered the 2-D halos from the main 3-D arrays into the new temp arrays, copied the temporaries back to the host in one contiguous block, passed the halos between MPI processes, and finally copied the exchanged values back to device memory and scattered the halos back into the 3-D arrays. While this approach does add to the kernel execution time, it saves a considerable amount of data transfer time.

In the example code below, note that the source code added to support the halo gathers and transfers is guarded by the preprocessor _OPENACC macro and will only be executed if the code is compiled by an OpenACC-enabled compiler.

#ifdef _OPENACC
! Gather the halo slices of the velocity 3D arrays into 2D arrays to allow
! for a faster, contiguous copy from the device to the host
!$acc kernels
  do i=1,NX
    do j=1,NY
      ! ... copy the outgoing halo slices of vx, vy and vz into vx1, vy1 and vz1 ...
    enddo
  enddo
!$acc end kernels
!$acc update host(vx1,vy1,vz1)

! vx(k+1), left shift
  call MPI_SENDRECV(vx1(:,:), number_of_values, MPI_DOUBLE_PRECISION, &
       receiver_left_shift, message_tag, vx2(:,:), number_of_values, &
       MPI_DOUBLE_PRECISION, sender_left_shift, message_tag, MPI_COMM_WORLD,&
       message_status, code)

! vy(k+1), left shift
  call MPI_SENDRECV(vy1(:,:), number_of_values, MPI_DOUBLE_PRECISION, &
       receiver_left_shift,message_tag, vy2(:,:),number_of_values,   &
       MPI_DOUBLE_PRECISION, sender_left_shift, message_tag, MPI_COMM_WORLD,&
       message_status, code)

! vz(k-1), right shift
  call MPI_SENDRECV(vz1(:,:), number_of_values, MPI_DOUBLE_PRECISION, &
       receiver_right_shift, message_tag, vz2(:,:), number_of_values, &
       MPI_DOUBLE_PRECISION, sender_right_shift, message_tag, MPI_COMM_WORLD, &
       message_status, code)

!$acc update device(vx2,vy2,vz2)
! Scatter the received halo values back into the velocity 3D arrays
!$acc kernels
  do i=1,NX
    do j=1,NY
      ! ... copy the received halo values from vx2, vy2 and vz2 back into vx, vy and vz ...
    enddo
  enddo
!$acc end kernels
#endif


To build and run the step4 code on your system, do the following:

cd step4
make build
make run
make verify

Step 5: Loop Schedule Tuning

The final step is to tune the OpenACC compute region loop schedules using the gang, worker, and vector clauses. The default kernel schedules chosen by the NVIDIA OpenACC compiler are usually quite good. Manual tuning efforts often don't improve timings significantly, but it's always worthwhile to spend a little time examining whether you can do better by overriding compiler-generated loop schedules using explicit loop scheduling clauses. You can usually tell fairly quickly if the clauses are having an effect.

Note that there is no well-defined method for finding an optimal kernel schedule. The best advice is to start with the compiler's default schedule and try small adjustments to see if and how they affect execution time. The kernel schedule you choose will affect whether and how shared memory is used, global array accesses, and various types of optimizations. Typically, it's better to perform gang scheduling of loops with large iteration counts.

!$acc loop gang
  do k = k2begin,NZ_LOCAL
    kglobal = k + offset_k
!$acc loop worker vector collapse(2)
    do j = 2,NY
      do i = 2,NX
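
For example, one small variation to time against the schedule above is to request an explicit vector length on the collapsed inner loops. This is only a sketch of the kind of adjustment to try; it may or may not be faster on a given GPU.

!$acc loop gang
  do k = k2begin,NZ_LOCAL
    kglobal = k + offset_k
!$acc loop vector(32) collapse(2)
    do j = 2,NY
      do i = 2,NX
        ! ... same loop body as above ...
      enddo
    enddo
  enddo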

To build and run the step5 code on your system, do the following:

cd step5
make build
make run
make verify