Installation#
Downloading NVSHMEM#
To download NVSHMEM, ensure that you are registered for the NVIDIA Developer Program.
Go to the NVIDIA NVSHMEM home page.
Click Get Started.
Accept the Terms and Conditions. The page displays a button labeled with the number of the latest NVSHMEM release.
To download the latest release of NVSHMEM, click Download.
To download an earlier release, click NVSHMEM Archive and select the NVSHMEM version you want to install.
Refer to the following sections to select the correct package depending on your Linux distribution.
Installing NVSHMEM#
Ubuntu#
Before you install NVSHMEM on Ubuntu:
Add a repository that contains the NVSHMEM packages to the APT system.
Install the NVSHMEM packages through APT from one of the following repositories:
A local repository.
A network repository.
To retrieve upgrades when newer versions are posted, we recommend that you select the network repository.
Install the repository.
In the following commands, replace <architecture> with your CPU architecture, x86_64 or sbsa, and replace <distro> with the Ubuntu version, for example ubuntu2004, ubuntu2204, or ubuntu2404.
For a local NVSHMEM repository:
sudo dpkg -i nvshmem-repo-<version>.deb
Note
The local repository installation prompts you to install the local signing key that it embeds and with which the packages are signed. Make sure to follow the instructions to install the local key, or the install phase will fail later.
For the network repository:
wget https://developer.download.nvidia.com/compute/cuda/repos/<distro>/<architecture>/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
Update the APT database:
sudo apt update
Install the package libnvshmem3-cuda-12 with APT. Additionally, to compile applications with NVSHMEM, you can install the package libnvshmem3-dev-cuda-12.
Note
If you are using the network repository, you can enter the following command to upgrade NVSHMEM to the latest version:
sudo apt install libnvshmem3-cuda-12 libnvshmem3-dev-cuda-12
To keep an earlier version of NVSHMEM and CUDA, specify a specific version, for example:
sudo apt install libnvshmem3=cuda-11_3.1.7-1 libnvshmem3-dev=cuda-11_3.1.7-1
Refer to the download page for exact package versions.
RHEL/CentOS#
Before you install NVSHMEM on RHEL or CentOS:
Add a repository that contains the NVSHMEM packages to the YUM system.
Install the NVSHMEM packages through YUM from one of the following repositories:
A local repository.
A network repository.
To retrieve upgrades more easily when newer versions are posted, we recommend that you use the network repository.
Install the repository.
In the following commands, replace <architecture> with your CPU architecture, x86_64 or sbsa.
For a local NVSHMEM repository:
sudo rpm -i nvshmem-repo-<version>.rpm
For a network repository:
RHEL 8:
sudo yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/<architecture>/cuda-rhel8.repo
RHEL 9:
sudo dnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel9/<architecture>/cuda-rhel9.repo
Install the package libnvshmem with YUM. To compile applications with NVSHMEM, install the package libnvshmem-devel, and if you plan to link NVSHMEM statically in your application, install the package libnvshmem-static.
Note
If you are using the network repository, you can enter the following command to upgrade NVSHMEM to the latest version.
sudo yum install libnvshmem3 libnvshmem3-devel libnvshmem3-static
To keep an earlier version of NVSHMEM and CUDA, specify a specific version, for example:
sudo yum install libnvshmem3-cuda-11-3.1.7-1 libnvshmem3-devel-cuda-11-3.1.7-1 libnvshmem3-static-cuda-11-3.1.7-1
Refer to the download page for exact package versions.
Other Distributions#
Download the tar file package by selecting the OS-agnostic installer. For more information, see Downloading NVSHMEM.
Extract the NVSHMEM package to your home directory, or to /usr/local if you are installing it as root for all users:
# cd /usr/local
# tar xvf nvshmem-linux-<version>.txz
When compiling applications, specify the directory path where you installed NVSHMEM, for example /usr/local/nvshmem-linux-<version>/.
After downloading and unpacking the archive, the extracted directory contains the following files and subdirectories:
File or Directory | Description
---|---
src | Contains NVSHMEM sources and headers.
perftest | Contains tests showing use of NVSHMEM APIs with performance reporting.
examples | Contains examples showing some common use cases of NVSHMEM.
scripts | Contains helper scripts, for example, a script to download, build, and install Hydra.
 | Change log of the repository history.
 | NVSHMEM Software License Agreement and copyright information.
Set the environment variable CUDA_HOME to point to the CUDA Toolkit.
Set the environment variable GDRCOPY_HOME to point to the GDRCopy installation. To build without GDRCopy, set the environment variable NVSHMEM_USE_GDRCOPY=0.
Note
Without GDRCopy, atomics are only supported across NVLink connections.
If MPI and/or SHMEM support is required, set NVSHMEM_MPI_SUPPORT=1 and/or NVSHMEM_SHMEM_SUPPORT=1.
Set the environment variables MPI_HOME and SHMEM_HOME to point to the MPI and OpenSHMEM installations, respectively. By default, the mpicc tool that is used during NVSHMEM compilation is loaded from $MPI_HOME/bin/mpicc. You can override this default by setting the environment variable MPICC to the tool's pathname.
Note
When using Open MPI and OSHMEM, the paths are the same. To use OSHMEM, Open MPI needs to be built with UCX support. NVSHMEM has been tested with Open MPI 4.0.1 and UCX 1.10. Other MPI and OpenSHMEM installations should work. By default, MPI support is enabled, and OpenSHMEM support is disabled.
Optional: To enable UCX support, set NVSHMEM_UCX_SUPPORT=1 and UCX_HOME to the installed UCX directory.
Optional: To enable libfabric support, set NVSHMEM_LIBFABRIC_SUPPORT=1 and LIBFABRIC_HOME to the installed libfabric directory.
Optional: To enable NCCL support, set NVSHMEM_USE_NCCL=1 and NCCL_HOME to the installed NCCL directory.
Optional: To enable the InfiniBand GPUDirect Async (IBGDA) transport, set NVSHMEM_IBGDA_SUPPORT=1.
Optional: To enable PMIx support, set NVSHMEM_PMIX_SUPPORT=1 and PMIX_HOME to the installed PMIx directory.
Configure the default bootstrap:
You can select the PMI bootstrap method by using the environment variable NVSHMEM_BOOTSTRAP_PMI at runtime. PMI-1, which can be used with the Hydra launcher, is the default PMI standard used by NVSHMEM.
To select PMIx as the default PMI interface, set NVSHMEM_DEFAULT_PMIX=1.
To select PMI-2 as the default PMI interface, set NVSHMEM_DEFAULT_PMI2=1.
Set NVSHMEM_PREFIX to specify the location where NVSHMEM is to be installed.
To build and install the library, run cmake and make -j install.
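For illustration only, a source build configured through the environment might look like the following sketch. All paths are example locations rather than required ones, and only MPI support is enabled here:
export CUDA_HOME=/usr/local/cuda          # CUDA Toolkit location (example path)
export GDRCOPY_HOME=/opt/gdrcopy          # GDRCopy installation (example path)
export NVSHMEM_MPI_SUPPORT=1              # build with MPI support
export MPI_HOME=/opt/openmpi              # MPI installation (example path)
export NVSHMEM_PREFIX=$HOME/nvshmem       # install location (example path)
cmake -S . -B build
cd build && make -j install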
Using the NVSHMEM CMake Build System#
NVSHMEM now supports building only with CMake version 3.19 or later. Refer to the following sections for a comparison with the deprecated and removed make build system.
The CMake build system is backward-compatible with the environment variables used in the original Makefile. That is, the same environment produces a comparable build whether make or CMake is used for the build.
CMake natively supports some environment and CMake variables for facilitating discovery of NVSHMEM dependencies (e.g. MPI and CUDA). These native settings can be used within the context of NVSHMEM, but describing them is outside the scope of this document.
Additionally, with the exception of NVSHMEM_HOME (which was superseded by NVSHMEM_PREFIX for the install prefix), all previous environment variables are respected when passed as CMake variables.
The steps outlined below describe typical build steps for NVSHMEM when using CMake:
If you are setting build configurations through the environment, follow the steps listed in the Other Distributions section above.
To create the makefiles for building NVSHMEM, call:
cmake [-D{VAR_NAME_1}={VAR_VAL_1} ... -D{VAR_NAME_N}={VAR_VAL_N}] -S . -B {PATH_TO_BUILD_DIR}
Where VAR_NAME_X can be any previously accepted environment variable or native CMake variable.
Note
If the variables are already set in the environment, it is not necessary to set them again on the command line. If variables are set on both the command line and the environment, the command line supersedes the environment.
To make NVSHMEM, enter:
cd {PATH_TO_BUILD_DIR} && make [-j] [install]
Note
CMake does not respect NVCC_GENCODE. Instead, use the CMake variable CUDA_ARCHITECTURES. For example, set CUDA_ARCHITECTURES to the following value to compile for Volta and Ampere GPUs:
cmake -DCUDA_ARCHITECTURES="70;80"
Note
When building with CMake from the source packages, you can disable perftests and examples by setting the CMake variables NVSHMEM_BUILD_TESTS and NVSHMEM_BUILD_EXAMPLES, respectively, to 0.
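For example, a source-package configuration that skips both might look like the following (the build directory name is arbitrary):
cmake -DNVSHMEM_BUILD_TESTS=0 -DNVSHMEM_BUILD_EXAMPLES=0 -S . -B build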
Note
You can build binary packages from the source package by setting the CMake variable NVSHMEM_BUILD_PACKAGES to 1. You can also exercise granular control over whether to produce RPM or DEB files with the CMake variables NVSHMEM_BUILD_DEB_PACKAGE and NVSHMEM_BUILD_RPM_PACKAGE. Both are set to 1 by default.
Note
Building NVSHMEM from source builds the Python bindings (NVSHMEM4Py) by default starting with the NVSHMEM 3.3 release. If you do not want to build NVSHMEM4Py, you can disable it with the command:
cmake -DNVSHMEM_BUILD_PYTHON_LIB=OFF
Integrating NVSHMEM Into CMake Projects#
NVIDIA provides CMake configuration files in the NVSHMEM binary packages it ships, including tarballs, RPMs, and DEBs.
You can find the NVSHMEM package with the config mode of the find_package command.
find_package(NVSHMEM REQUIRED)
target_link_libraries(my_target PRIVATE nvshmem::nvshmem_host)
target_link_libraries(my_target PRIVATE nvshmem::nvshmem_device)
The nvshmem::nvshmem_host and nvshmem::nvshmem_device targets are provided separately so that applications that use only host-side or on-stream APIs can avoid statically linking the device-side library.
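Putting these pieces together, a minimal CMakeLists.txt for a hypothetical executable named my_target might look like the following sketch; the project name and source file are placeholders.
cmake_minimum_required(VERSION 3.19)
project(my_nvshmem_app LANGUAGES CXX CUDA)

# Locate the NVSHMEM package configuration shipped with the binary packages.
find_package(NVSHMEM REQUIRED)

add_executable(my_target main.cu)
# Relocatable device code is needed when linking the NVSHMEM device library.
set_target_properties(my_target PROPERTIES CUDA_SEPARABLE_COMPILATION ON)
target_link_libraries(my_target PRIVATE nvshmem::nvshmem_host nvshmem::nvshmem_device)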
Using NVSHMEM in Your Applications#
Launching NVSHMEM Programs#
NVSHMEM supports the following methods of launching your applications:
Using a PMI-1 compatible launcher, such as Hydra.
Using a PMI-2 compatible launcher, such as Slurm.
Using a PMIx compatible launcher, such as Slurm or Open MPI mpirun.
Launching as part of an existing MPI application.
Launching as part of an existing OpenSHMEM application.
The PMI-1 and PMI-2 clients are included in NVSHMEM and are automatically built as part of the build process. You must provide a PMIx client by installing Open PMIx or by using the PMIx client that is installed by Open MPI or Slurm. When you build Open MPI, include the --enable-install-libpmix configure option. When you build NVSHMEM, set NVSHMEM_PMIX_SUPPORT=1 and PMIX_HOME=/path/to/openmpi.
To select the correct PMI library at runtime, set NVSHMEM_BOOTSTRAP_PMI to PMI, PMI-2, or PMIx. To bootstrap NVSHMEM by using MPI or OpenSHMEM, start the application in the typical way, initialize MPI or OpenSHMEM, and then call the nvshmemx_init_attr function to inform NVSHMEM that it is running as part of an existing MPI or OpenSHMEM job.
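As a minimal sketch of the MPI case (error handling omitted, and assuming NVSHMEM was built with MPI support), the initialization sequence might look like this:
#include <mpi.h>
#include <nvshmem.h>
#include <nvshmemx.h>

int main(int argc, char *argv[]) {
    // Start MPI first, then hand its communicator to NVSHMEM.
    MPI_Init(&argc, &argv);

    nvshmemx_init_attr_t attr;
    MPI_Comm comm = MPI_COMM_WORLD;
    attr.mpi_comm = &comm;
    nvshmemx_init_attr(NVSHMEMX_INIT_WITH_MPI_COMM, &attr);

    /* ... NVSHMEM and MPI communication ... */

    nvshmem_finalize();
    MPI_Finalize();
    return 0;
}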
Using NVSHMEM with Multiple Processes Per GPU#
Starting with release 2.5.0, NVSHMEM supports multiple processes per GPU (MPG), which does not require additional configuration and can be run with or without the CUDA Multi-Process Service (MPS) enabled.
If MPS is not enabled, however, only the following APIs are supported:
Point-to-point RMA
nvshmem_barrier_all() (host)
nvshmemx_barrier_all_on_stream()
nvshmem_sync_all() (host)
nvshmemx_sync_all_on_stream()
To enable complete NVSHMEM MPG support, the NVIDIA MPS server must be installed and running on the system. To enable support for the complete API, the MPS server must also be configured to limit the total GPU utilization across processes to a maximum of 100%.
The NVSHMEM library automatically detects when it is running on a system with more processes than GPUs, and distributes the processes accordingly. It also automatically detects the presence of the MPS server daemon and GPU utilization configuration and enables the APIs accordingly. If an unsupported API is used in a limited MPG run, an error message is printed, and the application exits.
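As an illustrative sketch only, on a system where two processes share each GPU, the MPS server could be started with a per-client active-thread limit so that the total stays within 100%; the directories and the 50% value below are examples, not requirements.
export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps       # example pipe directory
export CUDA_MPS_LOG_DIRECTORY=/tmp/nvidia-mps-log    # example log directory
nvidia-cuda-mps-control -d                           # start the MPS control daemon
echo "set_default_active_thread_percentage 50" | nvidia-cuda-mps-control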
Using NVSHMEM with Your C or C++ Program#
Include nvshmem.h and nvshmemx.h from include/.
Point to the include/ and lib/ paths.
NVSHMEM users: If your C or C++ program only uses NVSHMEM, install the Hydra Process Manager using the install_hydra.sh bash script in the scripts/ directory. Provide the download and install location as arguments, for example:
./install_hydra.sh <download_path> <install_path>
To run the NVSHMEM job, use the nvshmrun launcher, which is located under bin/ in the Hydra install path.
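For example, assuming Hydra was installed to <install_path> as shown above, a two-process run of a hypothetical binary myapp.out might look like:
<install_path>/bin/nvshmrun -n 2 ./myapp.out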
Using NVSHMEM with Your MPI or OpenSHMEM Program#
Note
Currently the only tested MPI library is Open MPI, but any standards-compliant MPI library should work.
To run a hybrid MPI + NVSHMEM program, use the mpirun launcher in the MPI installation.
Similarly, you can use NVSHMEM from OpenSHMEM programs, but you must use the corresponding launcher for the OpenSHMEM library. Currently the only tested OpenSHMEM implementation is OSHMEM in Open MPI. Other OpenSHMEM implementations, such as Sandia OpenSHMEM (SOS), should also work, but these implementations have not been tested. To run the hybrid OpenSHMEM/NVSHMEM job, use the oshrun launcher in the Open MPI installation or follow the launcher specification of your OpenSHMEM library.
NVSHMEM relies on a plug-in system for bootstrapping. By default, an MPI bootstrap plug-in is built for NVSHMEM and is installed in $(NVSHMEM_HOME)/lib. If this directory is not in your dynamic linker search path, you might need to add it to $LD_LIBRARY_PATH. This MPI plug-in is selected automatically at runtime if the nvshmemx_init_attr initialization function is used to request the MPI bootstrap, or if NVSHMEM_BOOTSTRAP="MPI" is set.
The source code of the MPI bootstrap plug-in is installed in $(NVSHMEM_HOME)/share/nvshmem/src/bootstrap-plugins and can be built separately from the NVSHMEM library (for example, to support additional MPI libraries). Custom bootstrap plug-ins are also possible and should implement the interface that is defined in $(NVSHMEM_HOME)/include/nvshmem_bootstrap.h. Plug-ins must be built as relocatable shared objects.
After the external plug-in library is built, it can be specified to NVSHMEM at runtime by setting NVSHMEM_BOOTSTRAP="plugin" and NVSHMEM_BOOTSTRAP_PLUGIN="[name of plugin]". For example, NVSHMEM_BOOTSTRAP="MPI" is equivalent to NVSHMEM_BOOTSTRAP="plugin" and NVSHMEM_BOOTSTRAP_PLUGIN="nvshmem_bootstrap_mpi.so".
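For illustration (the binary name is hypothetical), selecting the MPI bootstrap plug-in explicitly at launch time could look like:
NVSHMEM_BOOTSTRAP="plugin" NVSHMEM_BOOTSTRAP_PLUGIN="nvshmem_bootstrap_mpi.so" $MPI_HOME/bin/mpirun -n 2 ./myapp.out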
Running Performance Tests#
Before you can run the performance tests, you must first build them.
If the NVSHMEM library was built with NVSHMEM_MPI_SUPPORT=1, set the environment variables CUDA_HOME, NVSHMEM_HOME, and MPI_HOME to build the NVSHMEM performance tests:
CUDA_HOME=<path to supported CUDA installation>
NVSHMEM_HOME=<path to directory where NVSHMEM is installed>
MPI_HOME=<path to MPI installation>
If you built NVSHMEM with MPI and OpenSHMEM support (NVSHMEM_MPI_SUPPORT=1 and NVSHMEM_SHMEM_SUPPORT=1), MPI and OpenSHMEM support must be enabled when you build perftest/.
Build without SHMEM interoperability: To build the NVSHMEM performance tests without SHMEM interoperability, set the environment variable NVSHMEM_SHMEM_SUPPORT to 0.
By default, performance tests are installed under perftest/perftest_install. To install to a different path, set NVSHMEM_PERFTEST_INSTALL to point to the correct path.
to point to the correct path.Update
LD_LIBRARY_PATH
to point to$CUDA_HOME/lib64
,$MPI_HOME/lib
, and$NVSHMEM_HOME/lib
.Assuming Hydra is installed under
HYDRA_HOME
, run performance tests as NVSHMEM jobs, hybrid MPI+NVSHMEM jobs, or hybrid OpenSHMEM+NVSHMEM jobs with the following commands (usingperftest/device/pt-to-pt/put.cu
as an example).In each case,
<GPUs>
represents the number of GPUs to use. It may be any integer up to the number of GPUs that are accessible through P2P or InfiniBand NICs.NVSHMEM job using Hydra (PMI-1):
$HYDRA_HOME/bin/nvshmrun -n <GPUs> $NVSHMEM_PERFTEST_INSTALL/device/pt-to-pt/shmem_put_bw
NVSHMEM job using Slurm:
srun -n <GPUs> $NVSHMEM_PERFTEST_INSTALL/device/pt-to-pt/shmem_put_bw
Note
If Slurm was built with a PMI that does not match the NVSHMEM default, for example, if Slurm was built with PMIx support and NVSHMEM_DEFAULT_PMIX=1 was not set when building NVSHMEM, you can use NVSHMEM_BOOTSTRAP_PMI to override the default. Possible values are PMIX, PMI-2, and PMI. You can also use the Slurm --mpi= option to srun to tell Slurm which PMI interface to use.
Hybrid MPI/NVSHMEM job:
$MPI_HOME/bin/mpirun -n <GPUs> -x NVSHMEMTEST_USE_MPI_LAUNCHER=1 $NVSHMEM_PERFTEST_INSTALL/device/pt-to-pt/shmem_put_bw
Hybrid OpenSHMEM/NVSHMEM job:
$MPI_HOME/bin/oshrun -n <GPUs> -x USE_SHMEM_IN_TEST=1 $NVSHMEM_PERFTEST_INSTALL/device/pt-to-pt/shmem_put_bw
“Hello World” Example#
Save the following code as nvshmemHelloWorld.cu:
#include <stdio.h>
#include <cuda.h>
#include <nvshmem.h>
#include <nvshmemx.h>

__global__ void simple_shift(int *destination) {
    int mype = nvshmem_my_pe();
    int npes = nvshmem_n_pes();
    int peer = (mype + 1) % npes;

    nvshmem_int_p(destination, mype, peer);
}

int main(void) {
    int mype_node, msg;
    cudaStream_t stream;

    nvshmem_init();
    mype_node = nvshmem_team_my_pe(NVSHMEMX_TEAM_NODE);
    cudaSetDevice(mype_node);
    cudaStreamCreate(&stream);

    int *destination = (int *) nvshmem_malloc(sizeof(int));

    simple_shift<<<1, 1, 0, stream>>>(destination);
    nvshmemx_barrier_all_on_stream(stream);
    cudaMemcpyAsync(&msg, destination, sizeof(int), cudaMemcpyDeviceToHost, stream);

    cudaStreamSynchronize(stream);
    printf("%d: received message %d\n", nvshmem_my_pe(), msg);

    nvshmem_free(destination);
    nvshmem_finalize();
    return 0;
}
Build nvshmemHelloWorld.cu with one of the following commands.
When using dynamic linking:
nvcc -rdc=true -ccbin g++ -gencode=$NVCC_GENCODE -I $NVSHMEM_HOME/include nvshmemHelloWorld.cu -o nvshmemHelloWorld.out -L $NVSHMEM_HOME/lib -lnvshmem_host -lnvshmem_device
When using static linking:
nvcc -rdc=true -ccbin g++ -gencode=$NVCC_GENCODE -I $NVSHMEM_HOME/include nvshmemHelloWorld.cu -o nvshmemHelloWorld.out -L $NVSHMEM_HOME/lib -lnvshmem -lnvidia-ml -lcuda -lcudart
Where arch=compute_70,code=sm_70 is the value of NVCC_GENCODE for V100 GPUs.
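For example, for V100 GPUs you might set the variable as follows before running the nvcc commands above:
export NVCC_GENCODE="arch=compute_70,code=sm_70"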
Run the nvshmemHelloWorld sample with one of the following commands.
When running on one host with two GPUs (connected by PCIe, NVLink, or InfiniBand):
$HYDRA_HOME/bin/nvshmrun -n 2 -ppn 2 ./nvshmemHelloWorld.out
When running on two hosts with one GPU per host connected by InfiniBand:
$HYDRA_HOME/bin/nvshmrun -n 2 -ppn 1 --hosts hostname1,hostname2 ./nvshmemHelloWorld.out
NVSHMEM LLVM Bitcode Library-Based Hello World Example#
Starting with NVSHMEM release 3.2, the Python pip wheels and conda packages ship with an LLVM-IR library that exposes the device-side APIs (libnvshmem_device.bc). The nvidia-nvshmem-cu12 Python packages ship with a bitcode library that supports the Hopper and Blackwell architectures, and the nvidia-nvshmem-cu11 library supports the Volta and Ampere architectures. The following example lowers the "hello world" application that was created in "Hello World" Example to LLVM IR, links it against libnvshmem_device.bc, and compiles a cubin from the linked LLVM-IR, which can be imported into an NVSHMEM application using nvshmemx_cumodule_init.
Lower the device code of the application to LLVM-IR using Clang. The NVSHMEM_BITCODE_APPLICATION definition instructs the compiler to include only the appropriate headers for lowering to LLVM-IR.
clang -c -emit-llvm -std=c++11 -x cuda --cuda-path=${CUDA_HOME} --cuda-device-only --cuda-gpu-arch={desired architecture} -I$NVSHMEM_HOME/include -DNVSHMEM_BITCODE_APPLICATION nvshmemHelloWorld.cu -o nvshmemHelloWorldPrelink.bc
Link and optimize the application using the LLVM utilities llvm-link and opt. The --only-needed argument significantly reduces the size of the resulting binary.
llvm-link --only-needed nvshmemHelloWorldPrelink.bc $NVSHMEM_HOME/lib/libnvshmem_device.bc -o nvshmemHelloWorldPreopt.bc
opt -O3 nvshmemHelloWorldPreopt.bc -o nvshmemHelloWorld.bc
Lower the code to PTX using the llc LLVM utility.
llc -O3 --disable-tail-calls -mcpu={desired architecture} -mattr=ptx{YY} nvshmemHelloWorld.bc -o nvshmemHelloWorld.ptx
Assemble the PTX to SASS using ptxas from the CUDA toolkit. You can do this in one or two steps, but performing the link step separately has been found to improve performance.
Note
There is no separate library in the link step.
ptxas -c -arch={desired architecture} nvshmemHelloWorld.ptx -o nvshmemHelloWorld_prelink.cubin
nvlink -arch={desired architecture} nvshmemHelloWorld_prelink.cubin -o nvshmemHelloWorld.cubin
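As a hedged host-side sketch, the resulting cubin could then be loaded with the CUDA driver API and registered with NVSHMEM through nvshmemx_cumodule_init; this assumes NVSHMEM and the CUDA context are already initialized, and error checking is omitted.
#include <cuda.h>
#include <nvshmem.h>
#include <nvshmemx.h>

void load_hello_world_cubin(void) {
    CUmodule module;
    // Load the cubin produced by the nvlink step above.
    cuModuleLoad(&module, "nvshmemHelloWorld.cubin");
    // Initialize the NVSHMEM device state contained in the module.
    nvshmemx_cumodule_init(module);
}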