Installation

Downloading NVSHMEM

Download and extract the NVSHMEM txz archive from https://developer.download.nvidia.com/compute/redist/nvshmem/version-number/source (for example, https://developer.download.nvidia.com/compute/redist/nvshmem/2.11.0/source/). The extracted directory contains the following files and subdirectories:

File or Directory   Description
src/                Contains NVSHMEM sources and headers.
perftest/           Contains tests showing use of NVSHMEM APIs with performance reporting.
examples/           Contains examples of some common NVSHMEM use cases.
scripts/            Contains helper scripts, for example, the script to download, build, and install Hydra.
changelog           Change history for the repository.
COPYRIGHT.txt       Copyright information.
NVSHMEM-SLA.txt     NVSHMEM Software License Agreement (SLA).

Building And Installing NVSHMEM

  1. Set the CUDA_HOME environment variable to point to the CUDA Toolkit.

  2. Set the GDRCOPY_HOME environment variable to point to the GDRCopy installation.

    To build without GDRCopy, set the environment variable NVSHMEM_USE_GDRCOPY=0.

    Note

    Without GDRCopy, atomics are only supported across NVLink connections.

  3. If MPI and/or SHMEM support is required, set NVSHMEM_MPI_SUPPORT=1 and/or NVSHMEM_SHMEM_SUPPORT=1.

  4. Set the MPI_HOME and SHMEM_HOME environment variables to point to the MPI and OpenSHMEM installations, respectively.

  5. By default, the location of mpicc that is used during NVSHMEM compilation is set to $MPI_HOME/bin/mpicc.

    This location can be overridden by specifying MPICC=<path/to/mpicc> in the environment.

    Note

    • When using Open MPI and OSHMEM, the paths are the same.

    • To use OSHMEM, Open MPI needs to be built with UCX support.

    • NVSHMEM has been tested with Open MPI 4.0.1 and UCX 1.10.

    • Other MPI and OpenSHMEM installations should work.

    • By default, MPI support is enabled, and OpenSHMEM support is disabled.

  6. Optional: To enable UCX support, set NVSHMEM_UCX_SUPPORT=1 and UCX_HOME to the installed UCX directory.

  7. Optional: To enable libfabric support, set NVSHMEM_LIBFABRIC_SUPPORT=1 and LIBFABRIC_HOME to the installed libfabric directory.

  8. Optional: To enable NCCL support, set NVSHMEM_USE_NCCL=1 and NCCL_HOME to the installed NCCL directory.

  9. Optional: To enable the InfiniBand GPUDirect Async (IBGDA) transport, set NVSHMEM_IBGDA_SUPPORT=1.

  10. Optional: To enable PMIx support, set NVSHMEM_PMIX_SUPPORT=1 and PMIX_HOME to the installed PMIx directory.

  11. Configure the default bootstrap:

    • The PMI bootstrap method can be selected by using the NVSHMEM_BOOTSTRAP_PMI environment variable at runtime.

      PMI-1, which can be used with the Hydra launcher, is the default PMI standard that is used by NVSHMEM.

    • To select PMIx as the default PMI interface, set NVSHMEM_DEFAULT_PMIX=1.

    • To select PMI-2 as the default PMI interface, set NVSHMEM_DEFAULT_PMI2=1.

  12. Set NVSHMEM_PREFIX to specify the location where NVSHMEM will be installed.

  13. To build and install the library, run cmake and make -j install.
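
For example, a representative environment-driven build with MPI support enabled might look like the following (all paths are placeholders; adjust the variables from the steps above as needed):

    export CUDA_HOME=/usr/local/cuda
    export GDRCOPY_HOME=/usr/local/gdrcopy
    export NVSHMEM_MPI_SUPPORT=1
    export MPI_HOME=/usr/local/openmpi
    export NVSHMEM_PREFIX=/opt/nvshmem
    cmake -S . -B build
    cd build && make -j install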

Using the NVSHMEM cmake build system

NVSHMEM now supports building only with CMake version 3.19 or later; the legacy make build system has been deprecated and removed. The following sections describe how the CMake build relates to the previous make-based build.

The CMake build system is backward compatible with the environment variables used by the original Makefile; the same environment will produce a comparable build whether make or CMake is used.

CMake natively supports some environment and CMake variables to facilitate discovery of NVSHMEM dependencies (for example, MPI and CUDA). These native settings can be used within the context of NVSHMEM, but describing them is outside the scope of this document.

Additionally, with the exception of NVSHMEM_HOME (which was superseded by NVSHMEM_PREFIX for the install prefix), all previous environment variables are respected when passed as CMake variables.

The following steps describe a typical NVSHMEM build when using CMake:

  1. If setting build configurations through the environment, follow steps 1-12 as listed in section 3.2 above.

  2. To generate the makefiles for building NVSHMEM, call:

    cmake [-D{VAR_NAME_1}={VAR_VAL_1} ... -D{VAR_NAME_N}={VAR_VAL_N}] -S . -B {PATH_TO_BUILD_DIR}
    
  3. Here, VAR_NAME_X can be any previously accepted environment variable or native CMake variable. If a variable is already set in the environment, it does not need to be set again on the command line; if a variable is set both on the command line and in the environment, the command line supersedes the environment. To build NVSHMEM, run cd {PATH_TO_BUILD_DIR} && make [-j] [install].

  4. Note: CMake does not respect NVCC_GENCODE. Instead, use the CMake variable CUDA_ARCHITECTURES. For example, pass cmake -DCUDA_ARCHITECTURES="70;80" to compile for Volta and Ampere GPUs.

  5. When building with CMake from the source packages, the perftests and examples can be disabled by setting the CMake variables NVSHMEM_BUILD_TESTS and NVSHMEM_BUILD_EXAMPLES, respectively, to 0.

  6. Binary packages can be built from the source package by setting the CMake variable NVSHMEM_BUILD_PACKAGES to 1.

    Users also have granular control over whether to produce RPM or DEB files with the CMake variables NVSHMEM_BUILD_DEB_PACKAGE and NVSHMEM_BUILD_RPM_PACKAGE. These are set to 1 by default.
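
For example, an equivalent configuration expressed entirely as CMake variables, compiling for Volta and Ampere GPUs and skipping the perftests and examples (paths are placeholders):

    cmake -DNVSHMEM_MPI_SUPPORT=1 -DMPI_HOME=/usr/local/openmpi \
          -DNVSHMEM_PREFIX=/opt/nvshmem -DCUDA_ARCHITECTURES="70;80" \
          -DNVSHMEM_BUILD_TESTS=0 -DNVSHMEM_BUILD_EXAMPLES=0 \
          -S . -B build
    cd build && make -j install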

Using NVSHMEM In Your Applications

Launching NVSHMEM Programs

NVSHMEM supports the following methods to launch your application:

  • Using a PMI-1 compatible launcher, such as Hydra.

  • Using a PMI-2 compatible launcher, such as Slurm.

  • Using a PMIx compatible launcher, such as Slurm or Open MPI mpirun.

  • Launching as part of an existing MPI application.

  • Launching as part of an existing OpenSHMEM application.

The PMI-1 and PMI-2 clients are included in NVSHMEM and are built automatically as part of the build process. A PMIx client must be provided by the user, either by installing Open PMIx or by using the PMIx client that is installed by Open MPI or Slurm. When you build Open MPI, include the --enable-install-libpmix configure option. When you build NVSHMEM, set NVSHMEM_PMIX_SUPPORT=1 and PMIX_HOME=/path/to/openmpi.
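
For example, a hedged sketch of that setup, using a placeholder install prefix:

    # Build Open MPI with its bundled PMIx client installed alongside it.
    ./configure --prefix=/opt/openmpi --enable-install-libpmix
    make -j install

    # Then build NVSHMEM against that PMIx installation.
    export NVSHMEM_PMIX_SUPPORT=1
    export PMIX_HOME=/opt/openmpi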

To select the correct PMI library at runtime, set NVSHMEM_BOOTSTRAP_PMI to PMI, PMI-2, or PMIx. To bootstrap NVSHMEM by using MPI or OpenSHMEM, start the application in the typical way, initialize MPI or OpenSHMEM, and then call nvshmemx_init_attr to inform NVSHMEM that it is running as part of an existing MPI or OpenSHMEM job.
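
The following is a minimal sketch of the MPI bootstrap pattern, assuming NVSHMEM was built with NVSHMEM_MPI_SUPPORT=1:

    #include <mpi.h>
    #include <nvshmem.h>
    #include <nvshmemx.h>

    int main(int argc, char **argv) {
        // Initialize MPI first; NVSHMEM then attaches to the existing MPI job.
        MPI_Init(&argc, &argv);

        MPI_Comm comm = MPI_COMM_WORLD;
        nvshmemx_init_attr_t attr;
        attr.mpi_comm = &comm;
        nvshmemx_init_attr(NVSHMEMX_INIT_WITH_MPI_COMM, &attr);

        // ... NVSHMEM and MPI communication ...

        nvshmem_finalize();
        MPI_Finalize();
        return 0;
    }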

Using NVSHMEM with Multiple Processes-Per-GPU

Starting with release 2.5.0, NVSHMEM supports multiple processes per GPU (MPG), which does not require additional configuration and can be run with or without the CUDA Multi-process Service (MPS) enabled.

If MPS is not enabled, however, only the following APIs are supported:

  • Point-to-point RMA

  • nvshmem_barrier_all() host

  • nvshmemx_barrier_all_on_stream()

  • nvshmem_sync_all() host

  • nvshmemx_sync_all_on_stream()

To enable complete NVSHMEM MPG support, the NVIDIA MPS server must be installed and running on the system. To enable support for the complete API, the MPS server must also be configured so that the total GPU utilization provisioned across processes is limited to at most 100%.

The NVSHMEM library will automatically detect when it runs on a system with more processes than GPUs and fan out the processes accordingly. It also automatically detects the presence of the MPS server daemon and GPU utilization configuration and enables the APIs accordingly. If an unsupported API is used in a limited MPG run, an error message will be printed, and the application will exit.
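
As an illustration (hedged; consult the CUDA MPS documentation for authoritative usage), with two processes sharing one GPU the per-process active thread percentage can be capped at 50% so that the total does not exceed 100%:

    # Start the MPS control daemon (assumes appropriate permissions).
    nvidia-cuda-mps-control -d

    # Cap each client at 50% of the GPU so that two processes sum to 100%.
    echo "set_default_active_thread_percentage 50" | nvidia-cuda-mps-control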

Using NVSHMEM With Your C or C++ Program

  1. Include nvshmem.h and nvshmemx.h from include/.

  2. Point your compiler and linker to the include/ and lib/ paths of the NVSHMEM installation.

  3. If your C or C++ program uses only NVSHMEM, install the Hydra Process Manager using the install_hydra.sh bash script in the scripts/ directory.

    1. Provide the download and install location as arguments, for example:

      ./install_hydra.sh <download_path> <install_path>
      
    2. To run the NVSHMEM job, use the nvshmrun launcher, which is located under bin/ in the Hydra install path.

Using NVSHMEM With Your MPI or OpenSHMEM Program

Note

The only MPI library that is currently tested is Open MPI, but any standards-compliant MPI library should work.

To run a Hybrid MPI + NVSHMEM program, use the mpirun launcher in the MPI installation.

Similarly, NVSHMEM can be used from OpenSHMEM programs; in this case, you must use the launcher that corresponds to the OpenSHMEM library. The only OpenSHMEM implementation that is currently tested is OSHMEM in Open MPI. Other OpenSHMEM implementations, such as Sandia OpenSHMEM (SOS), should also work, but they have not been tested. To run a hybrid OpenSHMEM/NVSHMEM job, use the oshrun launcher in the Open MPI installation or follow the launcher specification of your OpenSHMEM library.

NVSHMEM relies on a plug-in system for bootstrapping. By default, an MPI bootstrap plug-in is built for NVSHMEM and is installed in $(NVSHMEM_HOME)/lib. If this directory is not in your dynamic linker search path, you might need to add it to $LD_LIBRARY_PATH. This MPI plug-in is selected automatically at runtime if the nvshmemx_init_attr initialization function is used to request the MPI bootstrap, or if NVSHMEM_BOOTSTRAP="MPI" is set.

The source code of the MPI bootstrap plug-in is installed in $(NVSHMEM_HOME)/share/nvshmem/src/bootstrap-plugins and can be built separately from the NVSHMEM library (for example, to support additional MPI libraries). Custom bootstrap plugins are also possible and should implement the interface that is defined in $(NVSHMEM_HOME)/include/nvshmem_bootstrap.h. Plug-ins must be built as relocatable shared objects.

After the external plug-in library is built, it can be selected at runtime by setting NVSHMEM_BOOTSTRAP="plugin" and NVSHMEM_BOOTSTRAP_PLUGIN="[name of plugin]". For example, NVSHMEM_BOOTSTRAP="MPI" is equivalent to NVSHMEM_BOOTSTRAP="plugin" and NVSHMEM_BOOTSTRAP_PLUGIN="nvshmem_bootstrap_mpi.so".
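
For example, a launch line that selects the MPI bootstrap plug-in explicitly might look like the following (the application name is a placeholder):

    $MPI_HOME/bin/mpirun -n 2 \
        -x NVSHMEM_BOOTSTRAP=plugin \
        -x NVSHMEM_BOOTSTRAP_PLUGIN=nvshmem_bootstrap_mpi.so \
        ./myapp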

Running Performance Tests

Before you can run performance tests, you first must build them.

  1. If the NVSHMEM library was built with NVSHMEM_MPI_SUPPORT=1, set the CUDA_HOME, NVSHMEM_HOME and MPI_HOME environment variables to build NVSHMEM performance tests:

    CUDA_HOME=<path to supported CUDA installation>
    NVSHMEM_HOME=<path to directory where NVSHMEM is installed>
    MPI_HOME=<path to MPI installation>
    

    If NVSHMEM was built with MPI and OpenSHMEM support (NVSHMEM_MPI_SUPPORT=1 and NVSHMEM_SHMEM_SUPPORT=1), MPI and OpenSHMEM support must also be enabled when you build perftest/.

    To build the performance tests without SHMEM interoperability, set the environment variable NVSHMEM_SHMEM_SUPPORT to 0. By default, the performance tests are installed under perftest/perftest_install. To install them to a different path, set NVSHMEM_PERFTEST_INSTALL to point to the desired path.

  2. Update LD_LIBRARY_PATH to point to $CUDA_HOME/lib64, $MPI_HOME/lib, and $NVSHMEM_HOME/lib.

  3. Assuming Hydra is installed under HYDRA_HOME, run performance tests as NVSHMEM jobs, hybrid MPI+NVSHMEM jobs, or hybrid OpenSHMEM+NVSHMEM jobs with the following commands (using perftest/device/pt-to-pt/put.cu as an example):

NVSHMEM job using Hydra (PMI-1)

$HYDRA_HOME/bin/nvshmrun -n <up to number of P2P or InfiniBand
NIC accessible GPUs>
$NVSHMEM_PERFTEST_INSTALL/device/pt-to-pt/shmem_put_bw

NVSHMEM job using Slurm

srun -n <up to number of P2P or InfiniBand NIC accessible GPUs>
$NVSHMEM_PERFTEST_INSTALL/device/pt-to-pt/shmem_put_bw

Note

If Slurm was built with a PMI that does not match the NVSHMEM default (for example, Slurm was built with PMIx support but NVSHMEM_DEFAULT_PMIX=1 was not set when building NVSHMEM), NVSHMEM_BOOTSTRAP_PMI can be used to override the default. Possible values are PMIX, PMI-2, and PMI. The Slurm --mpi= option to srun can be used to tell Slurm which PMI interface to use.
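
For example, a hedged illustration for a Slurm installation that was built with PMIx support:

    NVSHMEM_BOOTSTRAP_PMI=PMIX srun --mpi=pmix -n 2 \
        $NVSHMEM_PERFTEST_INSTALL/device/pt-to-pt/shmem_put_bw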

Hybrid MPI/NVSHMEM job

$MPI_HOME/bin/mpirun -n <up to number of GPUs accessible by P2P
or InfiniBand NIC> -x NVSHMEMTEST_USE_MPI_LAUNCHER=1
$NVSHMEM_PERFTEST_INSTALL/device/pt-to-pt/shmem_put_bw

Hybrid OpenSHMEM/NVSHMEM job

$MPI_HOME/bin/oshrun -n <up to number of GPUs accessible by P2P
or InfiniBand NIC> -x USE_SHMEM_IN_TEST=1
$NVSHMEM_PERFTEST_INSTALL/device/pt-to-pt/shmem_put_bw

“Hello World” Example

  1. Save the following code as nvshmemHelloWorld.cu:

    #include <stdio.h>
    #include <cuda.h>
    #include <nvshmem.h>
    #include <nvshmemx.h>
    
    __global__ void simple_shift(int *destination) {
        int mype = nvshmem_my_pe();
        int npes = nvshmem_n_pes();
        int peer = (mype + 1) % npes;
    
        nvshmem_int_p(destination, mype, peer);
    }
    
    int main(void) {
        int mype_node, msg;
        cudaStream_t stream;
    
        nvshmem_init();
        mype_node = nvshmem_team_my_pe(NVSHMEMX_TEAM_NODE);
        cudaSetDevice(mype_node);
        cudaStreamCreate(&stream);
    
        int *destination = (int *) nvshmem_malloc(sizeof(int));
    
        simple_shift<<<1, 1, 0, stream>>>(destination);
        nvshmemx_barrier_all_on_stream(stream);
        cudaMemcpyAsync(&msg, destination, sizeof(int), cudaMemcpyDeviceToHost, stream);
    
        cudaStreamSynchronize(stream);
        printf("%d: received message %d\n", nvshmem_my_pe(), msg);
    
        nvshmem_free(destination);
        nvshmem_finalize();
        return 0;
    }
    
  2. Build nvshmemHelloWorld.cu with the following command:

    When using dynamic linking:

    nvcc -rdc=true -ccbin g++ -gencode=$NVCC_GENCODE -I
    $NVSHMEM_HOME/include nvshmemHelloWorld.cu -o
    nvshmemHelloWorld.out -L $NVSHMEM_HOME/lib -lnvshmem_host -lnvshmem_device
    

    When using static linking:

    nvcc -rdc=true -ccbin g++ -gencode=$NVCC_GENCODE -I
    $NVSHMEM_HOME/include nvshmemHelloWorld.cu -o
    nvshmemHelloWorld.out -L $NVSHMEM_HOME/lib -lnvshmem -lnvidia-ml -lcuda -lcudart
    

    Here, NVCC_GENCODE would be set to arch=compute_70,code=sm_70 for V100 GPUs.

  3. Run the nvshmemHelloWorld sample with one of the following commands:

    When running on one host with two GPUs (connected by PCIe, NVLink, or InfiniBand):

    $HYDRA_HOME/bin/nvshmrun -n 2 -ppn 2 ./nvshmemHelloWorld.out
    

    When running on two hosts with one GPU each, connected by InfiniBand:

    $HYDRA_HOME/bin/nvshmrun -n 2 -ppn 1 --hosts hostname1,hostname2 ./nvshmemHelloWorld.out
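
    With two PEs, each PE writes its own ID into its neighbor's destination buffer, so the output should resemble the following (line order may vary):

    0: received message 1
    1: received message 0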