NVIDIA 2D Image And Signal Performance Primitives (NPP)  Version 11.5.0.*
NVIDIA 2D Image and Signal Processing Performance Primitives

What is NPP?

NVIDIA NPP is a library of functions for performing CUDA-accelerated 2D image and signal processing. The primary set of functionality in the library focuses on image processing and is widely applicable for developers in these areas. NPP will evolve over time to encompass more of the compute-heavy tasks in a variety of problem domains. The NPP library is written to maximize flexibility while maintaining high performance.

NPP can be used in one of two ways:

Either route allows developers to harness the massive compute resources of NVIDIA GPUs while simultaneously reducing development time. After reading this Main Page, it is recommended that you read the General API Conventions page below, and then either the Image-Processing Specific API Conventions page or the Signal-Processing Specific API Conventions page, depending on the kind of processing you expect to do. Finally, if you select the Modules tab at the top of this page, you can find the kinds of functions available for the NPP operations that support your needs.



The NPP API is defined in the following files:

Header Files

All of these header files are located in the following directory of the CUDA Toolkit:


Library Files

NPP's functionality is split into three distinct library groups:

On the Windows platform the NPP stub libraries are found in the CUDA Toolkit's library directory:


The matching DLLs are located in the CUDA Toolkit's binary directory. For example:

* /bin/nppial64_111_<build_no>.dll  // Dynamic image-processing library for 64-bit Windows.

On Linux platforms, the dynamic libraries are located in the lib directory, and the names include the major and minor version numbers along with the build number:

* /lib/libnppc.so.11.1.<build_no>   // NPP dynamic core library for Linux 

Library Organization

Note: The static NPP libraries depend on a common thread abstraction layer library called cuLIBOS (libculibos.a), which is now distributed as part of the toolkit. Consequently, cuLIBOS must be provided to the linker when linking against the static libraries.

To minimize library loading and CUDA runtime startup times, it is recommended to use the static libraries whenever possible. To improve loading and runtime performance when using dynamic libraries, NPP provides a full set of NPPI sub-libraries. Linking only to the sub-libraries that contain the functions your application uses can significantly improve load time and runtime startup performance. Some NPPI functions make calls to other NPPI and/or NPPS functions internally, so you may need to link to a few extra libraries depending on what function calls your application makes.

The NPPI sub-libraries are split into sections corresponding to the way the NPPI header files are split. This list of sub-libraries is as follows:

For example, on Linux, to compile a small color conversion application foo using NPP against the dynamic library, the following command can be used:

* nvcc foo.c -lnppc -lnppicc -o foo

Whereas to compile against the static NPP libraries, the following command should be used:

* nvcc foo.c -lnppc_static -lnppicc_static -lculibos -o foo

It is also possible to use the native host C++ compiler. Depending on the host operating system, some additional libraries, such as pthread or dl, might be needed on the linking line. The following command on Linux is suggested:

* g++ foo.c -lnppc_static -lnppicc_static -lculibos -lcudart_static -lpthread -ldl \
*     -I <cuda-toolkit-path>/include -L <cuda-toolkit-path>/lib64 -o foo

NPP is a stateless API. As of NPP 6.5, the only state that NPP remembers between function calls is the current stream ID (the stream ID set in the most recent nppSetStream() call) and a few bits of device-specific information about that stream. The default stream ID is 0. If an application intends to use NPP with multiple streams, it is the application's responsibility either to use the fully stateless, application-managed stream context interface described below or to call nppSetStream() whenever it wishes to change stream IDs. Any NPP function call that does not use an application-managed stream context will use the stream set by the most recent call to nppSetStream(); nppGetStream() and other "nppGet"-type function calls that do not take an application-managed stream context parameter will also always use that stream.
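The global-stream behavior described above can be sketched as follows. This is an illustrative snippet, not a complete application; it assumes the CUDA Toolkit and NPP headers and libraries are available, and it omits error checking for brevity.

```cpp
// Sketch: switching NPP's single global stream via the pre-_Ctx interface.
#include <npp.h>
#include <cuda_runtime.h>

int main()
{
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Every subsequent non-_Ctx NPP call now runs on `stream`.
    nppSetStream(stream);

    // ... launch NPP work here; nppGetStream() reports the stream set above.
    cudaStream_t current = nppGetStream();
    (void)current;

    nppSetStream(0);            // revert to the default stream (ID 0)
    cudaStreamDestroy(stream);
    return 0;
}
```

Because nppSetStream() mutates global state, applications that mix streams across host threads are generally better served by the application-managed stream context interface described below.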

All NPP functions should be thread-safe.

Note: New to NPP 11.5 are


Note: New to NPP 11.4 are (some were in NPP 11.2 and NPP 11.3 but NPP 11.4 has some API improvements)


Note: New to NPP 10.1 is support for the fp16 (__half) data type in some NPP image processing functions, on GPU architectures of Volta and beyond. NPP image functions that support pixels of the __half data type have function names containing 16f, and pointers to pixels of that data type must be passed to NPP as the NPP data type Npp16f. Here is an example of how to pass image pointers of type __half to an NPP 16f function; it should work on all compilers, including Armv7 ones.

* nppiAdd_16f_C3R(reinterpret_cast<const Npp16f *>((const void *)(pSrc1Data)), nSrc1Pitch,
*                 reinterpret_cast<const Npp16f *>((const void *)(pSrc2Data)), nSrc2Pitch,
*                 reinterpret_cast<Npp16f *>((void *)(pDstData)), nDstPitch,
*                 oDstROI);

Application Managed Stream Context

Note: Also new to NPP 10.1 is support for application-managed stream contexts. Application-managed stream contexts make NPP truly stateless internally, allowing for rapid, zero-overhead stream context switching. While it is recommended that all new NPP application code use application-managed stream contexts, existing application code can continue to use nppSetStream() and nppGetStream() to manage stream IDs (also with no overhead now), but over time NPP will likely deprecate the older, non-application-managed stream context API. The two stream management techniques can be intermixed in an application, but any NPP call using the old API will use the stream set by the most recent call to nppSetStream(), and nppGetStream() calls will also return that stream ID.

All NPP function names ending in _Ctx expect an application-managed stream context to be passed as a parameter. The NppStreamContext application-managed stream context structure is defined in nppdefs.h and should be initialized by the application with the CUDA device ID and the values associated with a particular stream. Applications can use multiple fixed stream contexts, or change the values in a particular stream context on the fly whenever a different stream is to be used.
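A sketch of initializing an NppStreamContext from CUDA runtime queries is shown below. The field names follow the NppStreamContext definition in nppdefs.h; error checking is omitted for brevity, and exact initialization requirements should be confirmed against the nppdefs.h shipped with your toolkit version.

```cpp
// Sketch: populating an application-managed NppStreamContext for one stream.
#include <npp.h>
#include <cuda_runtime.h>

NppStreamContext makeStreamContext(cudaStream_t stream)
{
    NppStreamContext ctx = {};

    ctx.hStream = stream;
    cudaGetDevice(&ctx.nCudaDeviceId);

    // Device-specific values NPP uses to size its kernel launches.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, ctx.nCudaDeviceId);
    ctx.nMultiProcessorCount = prop.multiProcessorCount;
    ctx.nMaxThreadsPerMultiProcessor = prop.maxThreadsPerMultiProcessor;
    ctx.nMaxThreadsPerBlock = prop.maxThreadsPerBlock;
    ctx.nSharedMemPerBlock = prop.sharedMemPerBlock;

    cudaDeviceGetAttribute(&ctx.nCudaDevAttrComputeCapabilityMajor,
                           cudaDevAttrComputeCapabilityMajor, ctx.nCudaDeviceId);
    cudaDeviceGetAttribute(&ctx.nCudaDevAttrComputeCapabilityMinor,
                           cudaDevAttrComputeCapabilityMinor, ctx.nCudaDeviceId);

    // nStreamFlags must also be initialized in NPP 10.2 and later.
    cudaStreamGetFlags(ctx.hStream, &ctx.nStreamFlags);

    return ctx;
}
```

An application can build one such context per stream it uses and pass the appropriate context to each _Ctx function call.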

Note: NPP 10.2 and beyond contain an additional element in the NppStreamContext structure, named nStreamFlags, which MUST also be initialized by the application. Failure to do so could unnecessarily reduce NPP performance in some functions.

Note: NPP does not support non-blocking streams on Windows for devices working in WDDM mode.

Note that some of the "GetBufferSize"-style functions now take an application-managed stream context parameter; they should be called with the same stream context that will be passed to the associated NPP processing function.
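As a hedged illustration of that pairing, the sketch below uses the nppiSum single-channel 8-bit functions; the exact signatures should be confirmed against the NPPI statistics header, and error checking is omitted.

```cpp
// Sketch: pairing a "GetBufferSize" _Ctx query with its processing call,
// using the SAME stream context for both. Signatures assumed from the
// NPPI statistics header; verify against your toolkit's headers.
#include <npp.h>
#include <cuda_runtime.h>

void sumImage(const Npp8u *pSrc, int nSrcStep, NppiSize oSizeROI,
              Npp64f *pSumDevice, NppStreamContext ctx)
{
    // Query the required scratch-buffer size on the same stream context...
    int nBufferSize = 0;
    nppiSumGetBufferHostSize_8u_C1R_Ctx(oSizeROI, &nBufferSize, ctx);

    Npp8u *pScratch = nullptr;
    cudaMalloc(reinterpret_cast<void **>(&pScratch), nBufferSize);

    // ...that the processing call itself will run on.
    nppiSum_8u_C1R_Ctx(pSrc, nSrcStep, oSizeROI, pScratch, pSumDevice, ctx);

    cudaStreamSynchronize(ctx.hStream);
    cudaFree(pScratch);
}
```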

Note that NPP does minimal checking of the parameters in an application-managed stream context structure, so it is up to the application to ensure that they are correct and valid when passed to NPP functions.

Note that NPP has deprecated the nppicom JPEG compression library as of NPP 11.0; use the NVJPEG library instead.

Supported NVIDIA Hardware

NPP runs on all CUDA capable NVIDIA hardware. For details please see http://www.nvidia.com/object/cuda_learn_products.html