1. What's New

Welcome to the 23.9 version of the NVIDIA HPC SDK, a comprehensive suite of compilers and libraries enabling developers to program the entire HPC platform, from the GPU foundation to the CPU and out through the interconnect. The 23.9 release of the HPC SDK includes new features as well as important functionality and performance improvements.

  • HPC SDK 23.9 incorporates many updates with newer features and improved performance for Grace-Grace and Grace-Hopper systems. Component updates include NCCL, NVSHMEM, Nsight Systems, Nsight Compute, HPC-X, and libcu++, as well as many improvements to the HPC Compilers.
  • C++20 coroutines are now supported by nvc++ for execution on the host CPU. This feature can be enabled with the -fcoroutines flag.
  • The “auto” install mode has been added and may be selected when using the HPC SDK installer or by setting NVHPC_INSTALL_TYPE=auto. By selecting the “auto” install mode, the compiler localrc configuration file is saved in $HOME/.config/NVIDIA/nvhpc rather than in the HPC SDK installation directory. In an upcoming release, the “auto” install mode will be the default.
  • The 23.9 version of nvfortran changes the calling/return sequence for Fortran complex functions to match GNU's gfortran convention. All Fortran code, including libraries, that uses complex numbers must be recompiled when using nvfortran on Arm systems. Please see the "Deprecations and Changes" section below for more information.

2. Release Component Versions

The NVIDIA HPC SDK 23.9 release contains the following versions of each component:

Table 1. HPC SDK Release Components
(cells list versions for CUDA 11.0 / CUDA 11.8 / CUDA 12.2; a single value applies to all bundled CUDA versions)

Component        Linux_x86_64                  Linux_ppc64le                 Linux_aarch64
nvc++            23.9                          23.9                          23.9
nvc              23.9                          23.9                          23.9
nvfortran        23.9                          23.9                          23.9
nvcc             11.0.221 / 11.8.89 / 12.2.53  11.0.221 / 11.8.89 / 12.2.53  11.0.221 / 11.8.89 / 12.2.53
NCCL             2.18.3 / 2.18.3 / 2.18.3      2.18.3 / 2.18.3 / 2.18.3      2.19.1 / 2.19.1 / 2.19.1
NVSHMEM          2.10.1 / 2.10.1 / 2.10.1      2.10.1 / 2.10.1 / 2.10.1      N/A / N/A / 2.10.1
cuFFTMp          N/A / 11.0.5 / 11.0.5         N/A / 11.0.5 / N/A            N/A / N/A / N/A
cuSOLVERMp       N/A / 0.4.1 / N/A             N/A / N/A / N/A               N/A / N/A / N/A
cuTENSOR         1.7.0                         1.7.0                         1.7.0
Nsight Compute   2023.2.1                      2023.2.1                      2023.2.1
Nsight Systems   2023.3.1                      2023.3.1                      2023.3.1
OpenMPI          3.1.5                         3.1.5                         3.1.5
HPC-X            N/A / 2.14 / 2.16             N/A / N/A / N/A               N/A / 2.14 / 2.16
OpenBLAS         0.3.23                        0.3.23                        0.3.23
Scalapack        2.2.0                         2.2.0                         2.2.0
Thrust           1.9.9 / 1.15.1 / 2.0.1        1.9.9 / 1.15.1 / 2.0.1        1.9.9 / 1.15.1 / 2.0.1
CUB              1.9.9 / 1.15.1 / 2.0.1        1.9.9 / 1.15.1 / 2.0.1        1.9.9 / 1.15.1 / 2.0.1
libcu++          1.0.0 / 1.8.1 / 2.1.1         1.0.0 / 1.8.1 / 2.1.1         1.0.0 / 1.8.1 / 2.1.1

3. Supported Platforms

3.1. Platform Requirements for the HPC SDK

Table 2. HPC SDK Platform Requirements

Linux_x86_64
  Linux distributions:
    RHEL/CentOS 7.3 - 7.9
    RHEL/CentOS/Rocky 8.0 - 8.7
    Fedora 33, 34
    OpenSUSE Leap 15.2 - 15.4
    SLES 15SP2, 15SP3, 15SP4
    Ubuntu 18.04, 20.04, 22.04
    Debian 10
  Minimum gcc toolchain, by language standard:
    C99: 4.8; C11: 4.9; C++03: 4.8; C++11: 4.9; C++14: 5.1; C++17: 7.1; C++20: 10.1

Linux_ppc64le
  Linux distributions:
    RHEL 7.3 - 7.7
    RHEL 8.0 - 8.7
  Minimum gcc toolchain, by language standard:
    C99: 4.8; C11: 4.9; C++03: 4.8; C++11: 4.9; C++14: 5.1; C++17: 7.1; C++20: 10.1

Linux_aarch64
  Linux distributions:
    RHEL/CentOS/Rocky 8.0 - 8.7
    Ubuntu 20.04, 22.04
    SLES 15SP2, 15SP3, 15SP4
    Amazon Linux 2
  Minimum gcc toolchain, by language standard:
    C99: 4.8; C11: 4.9; C++03: 4.8; C++11: 4.9; C++14: 5.1; C++17: 7.1; C++20: 10.1

Programs generated by the HPC Compilers for x86_64 processors require a minimum of AVX instructions, which includes Sandy Bridge and newer CPUs from Intel as well as Bulldozer and newer CPUs from AMD. On the POWER architecture, POWER8 and POWER9 CPUs are supported. The HPC SDK includes support for v8.1+ Server Class Arm CPUs that meet the requirements specified in Appendix E of the SBSA 7.1 specification.

The HPC Compilers are compatible with gcc and g++ and use the GCC C and C++ libraries; the minimum compatible versions of GCC are listed in Table 2. The minimum system requirements for CUDA and the NVIDIA Math Libraries are available in the NVIDIA CUDA Toolkit documentation.

3.2. Supported CUDA Toolchain Versions

The NVIDIA HPC SDK uses elements of the CUDA toolchain when building programs for execution with NVIDIA GPUs. Every HPC SDK installation package puts the required CUDA components into an installation directory called [install-prefix]/[arch]/[nvhpc-version]/cuda.

An NVIDIA CUDA GPU device driver must be installed on a system with a GPU before you can run a program compiled for the GPU on that system. The NVIDIA HPC SDK does not contain CUDA drivers. You must download and install the appropriate CUDA driver from NVIDIA, including the CUDA Compatibility Platform if that is required.

The nvaccelinfo tool prints the CUDA Driver version in its output. You can use it to find out which version of the CUDA Driver is installed on your system.

The NVIDIA HPC SDK 23.9 includes the following CUDA toolchain versions:
  • CUDA 11.0
  • CUDA 11.8
  • CUDA 12.2
The minimum required CUDA driver versions are listed in the table in Section 3.1.

4.  Known Limitations

  • Passing an internal procedure as an actual argument to a Fortran subprogram is supported by nvfortran, provided the corresponding dummy argument is declared with an interface block or as a procedure dummy argument. nvfortran does not support internal procedures as actual arguments when the dummy argument is declared EXTERNAL.
  • Some applications may see failures on Haswell and Broadwell with MKL version 2023.1.0 when running certain workloads with 4 or more OpenMP threads. The issue is resolved in MKL version 2023.2.0.
  • cuSolverMp has two new dependencies on the UCC and UCX libraries in the HPC-X directory. To execute a program linked against cuSolverMp, either load the "nvhpc-hpcx-cuda11" environment module for the HPC-X library, or set the environment variable LD_LIBRARY_PATH as follows:
    LD_LIBRARY_PATH=${NVHPCSDK_HOME}/comm_libs/11.8/hpcx/latest/ucc/lib:${NVHPCSDK_HOME}/comm_libs/11.8/hpcx/latest/ucx/lib:$LD_LIBRARY_PATH
  • To use HPC-X, please use the provided environment module files, or take care to source the hpcx-init.sh script and then run the hpcx_load function defined by that script:
    $ . /[install-path]/Linux_x86_64/dev/comm_libs/X.Y/hpcx/latest/hpcx-init.sh
    $ hpcx_load
    These actions set important environment variables that are needed when running HPC-X. The following warning from HPC-X while running an MPI job is a known issue: "WARNING: Open MPI tried to bind a process but failed. This is a warning only; your job will continue, though performance may be degraded." It may be suppressed as follows:
    export OMPI_MCA_hwloc_base_binding_policy=""
  • Fortran derived type objects with zero-size derived type allocatable components that are used in sourced allocation or allocatable assignment may result in a runtime segmentation violation.
  • When using -stdpar to accelerate C++ parallel algorithms, the algorithm calls cannot include virtual function calls or function calls through a function pointer, cannot use C++ exceptions, can only dereference pointers that point to the heap, and must use random access iterators (raw pointers as iterators work best).

5.  Deprecations and Changes

  • Arm (aarch64) only: The 23.9 version of nvfortran changes the calling/return sequence for Fortran complex functions to match GNU's gfortran convention. Prior to the 23.9 release, nvfortran functions returned complex values via the stack using a "hidden" pointer as the first parameter. Now, complex values are returned following the gfortran convention via the floating-point registers. All libraries released with the NVIDIA HPC SDK for Arm have been updated to follow the "gfortran" method. Users linking against Arm's performance libraries will need to use the "gcc" version instead of the "arm" version. All Fortran code, including libraries, that uses complex numbers must be recompiled when using nvfortran on Arm systems.
  • Support for CUDA Fortran textures is deprecated in CUDA 11.0 and 11.8, and has been removed from CUDA 12. The 23.9 release is the last version of the HPC Compilers to include support for CUDA Fortran textures.
  • The default MPI implementation selected by the modulefiles included with the HPC SDK will be changed to HPC-X in a future release.
  • The -Minfo=intensity option is no longer supported.
  • The CUDA_HOME environment variable is ignored by the HPC Compilers. It is replaced by NVHPC_CUDA_HOME.
  • The -Mipa option has been disabled starting with the 23.3 version of the HPC Compilers.
  • The -ta=tesla, -Mcuda, -Mcudalib options for the HPC Compilers have been deprecated.
  • Support for the RHEL 7-based operating systems will be removed in the HPC SDK version 23.7, corresponding with the upstream end-of-life (EOL).
  • In an upcoming release the HPC SDK will bundle only CUDA 11.8 and the latest version of the CUDA 12.x series. Codepaths in the HPC Compilers that support CUDA versions older than 11.0 will no longer be tested or maintained.
  • Support for the Ubuntu 18.04 operating system will be removed in the HPC SDK version 23.5, corresponding with the upstream end-of-life (EOL).
  • cudaDeviceSynchronize() in CUDA Fortran has been deprecated, and support has been removed from device code. It is still supported in host code.
  • Starting with the 21.11 version of the NVIDIA HPC SDK, the HPC-X package is no longer shipped as part of the packages made available for the POWER architecture.
  • Starting with the 21.5 version of the NVIDIA HPC SDK, the -cuda option for NVC++ and NVFORTRAN no longer automatically links the NVIDIA GPU math libraries. Please refer to the -cudalib option.
  • HPC Compiler support for the Kepler architecture of NVIDIA GPUs was deprecated starting with the 21.3 version of the NVIDIA HPC SDK.




Information furnished is believed to be accurate and reliable. However, NVIDIA Corporation assumes no responsibility for the consequences of use of such information or for any infringement of patents or other rights of third parties that may result from its use. No license is granted by implication or otherwise under any patent rights of NVIDIA Corporation. Specifications mentioned in this publication are subject to change without notice. This publication supersedes and replaces all other information previously supplied. NVIDIA Corporation products are not authorized as critical components in life support devices or systems without express written approval of NVIDIA Corporation.


NVIDIA, the NVIDIA logo, CUDA, CUDA-X, GPUDirect, HPC SDK, NGC, NVIDIA Volta, NVIDIA DGX, NVIDIA Nsight, NVLink, NVSwitch, and Tesla are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.
