Holoscan SDK v4.0.0

Relevant Technologies

Holoscan accelerates streaming AI applications by leveraging both hardware and software. The Holoscan SDK relies on multiple core technologies to achieve low latency and high throughput:

NVIDIA Developer Kits equipped with a ConnectX network adapter can be used along with the NVIDIA Rivermax SDK to provide an extremely efficient network connection that is further optimized for GPU workloads by using GPUDirect for RDMA. This technology avoids unnecessary memory copies and CPU overhead by copying data directly to or from pinned GPU memory, and supports both the integrated and the discrete GPU.

Note

NVIDIA is committed to supporting hardware vendors in enabling RDMA within their own drivers; one example is provided by AJA Video Systems as part of a partnership with NVIDIA for the Holoscan SDK. The AJASource operator is an example of how the SDK can leverage RDMA.

For more information, see the NVIDIA GPUDirect RDMA documentation.

GXF (Graph Execution Framework) is an NVIDIA-internal graph execution framework that forms the foundation of the Holoscan SDK. GXF provides a low-level entity-component system for building and executing computation graphs, including schedulers, memory allocators, message passing, and a YAML-based graph definition format.

The Holoscan SDK provides developer-friendly C++ and Python APIs that abstract away GXF internals, culminating in a fully native operator and application model. Today, most Holoscan SDK users do not need to interact with GXF directly.

GXF core concepts

For historical context and to help interpret older code or documentation, here is a mapping of GXF concepts to their Holoscan SDK equivalents:

GXF Concept            | Holoscan SDK Equivalent     | Description
-----------------------|-----------------------------|------------------------------------------------------------
Entity                 | (implicit)                  | A node in the computation graph; a container for components. In the Holoscan SDK, an Operator implicitly represents an entity.
Codelet                | Operator                    | A component that executes custom code via lifecycle methods (start, tick/compute, stop).
Component              | Resource                    | Supporting functionality such as memory allocators, clocks, or serializers attached to an entity.
Scheduling Term        | Condition                   | A predicate that determines when an operator is ready for execution.
Receiver / Transmitter | Input / Output Port         | Message-passing endpoints between operators.
Connection             | Flow (Edge)                 | A directed edge in the application graph connecting an output port to an input port.
Scheduler              | Scheduler                   | Orchestrates the execution of operators based on their conditions.
GXF Extension          | Operator / Resource library | A shared library that registers components with the runtime. Native Holoscan operators do not require GXF extension registration.
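The mapping above can be sketched as a tiny, self-contained toy model. This is not the GXF or Holoscan API; every class and method name below is illustrative only, but the sketch shows how operators, conditions, ports, and a scheduler relate:

```python
# Conceptual toy model of the table above (operators, conditions, ports,
# scheduler). NOT the Holoscan or GXF API -- names are illustrative only.
from collections import deque


class Port:
    """Message-passing endpoint (GXF Receiver/Transmitter)."""
    def __init__(self):
        self.queue = deque()


class Operator:
    """Executes custom code via compute() (GXF Codelet)."""
    def __init__(self, name):
        self.name = name
        self.inputs = {}
        self.outputs = {}

    def ready(self):
        """Condition (GXF Scheduling Term): ready when all inputs have data."""
        return all(p.queue for p in self.inputs.values())

    def compute(self):
        raise NotImplementedError


class Scheduler:
    """Orchestrates execution of operators based on their conditions."""
    def __init__(self, operators):
        self.operators = operators

    def run(self, max_ticks=10):
        for _ in range(max_ticks):
            ran = False
            for op in self.operators:
                if op.ready():
                    op.compute()
                    ran = True
            if not ran:  # nothing is ready: graph is drained
                break


class Source(Operator):
    """Emits a fixed list of values, one per tick."""
    def __init__(self, name, values):
        super().__init__(name)
        self.values = list(values)
        self.outputs["out"] = Port()

    def ready(self):
        return bool(self.values)

    def compute(self):
        self.outputs["out"].queue.append(self.values.pop(0))


class Doubler(Operator):
    """Doubles each incoming value."""
    def __init__(self, name, upstream):
        super().__init__(name)
        self.inputs["in"] = upstream.outputs["out"]  # Flow (Edge)
        self.outputs["out"] = Port()

    def compute(self):
        self.outputs["out"].queue.append(2 * self.inputs["in"].queue.popleft())


src = Source("src", [1, 2, 3])
dbl = Doubler("dbl", src)
Scheduler([src, dbl]).run()
print(list(dbl.outputs["out"].queue))  # -> [2, 4, 6]
```

In the real SDK, conditions, ports, and connections are declared in an operator's setup method and in the application's compose method rather than wired by hand as done here.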

NVIDIA TensorRT is a CUDA-based deep learning inference framework that provides highly optimized inference on NVIDIA GPUs, including the NVIDIA Developer Kits.

The inference module leverages TensorRT among other backends and provides the ability to execute multiple inferences in parallel.

Vulkan is commonly used for real-time visualization and, like CUDA, executes on the GPU. This enables efficient sharing of resources between CUDA and the Vulkan rendering framework.

The Holoviz module uses the external resource interoperability functions of the low-level CUDA driver API together with the Vulkan external memory and external semaphore extensions.

Streaming image processing often requires common 2D operations like resizing, converting bit widths, and changing color formats. NVIDIA has built the CUDA accelerated NVIDIA Performance Primitive Library (NPP) that can help with many of these common transformations. NPP is extensively showcased in the Format Converter operator of the Holoscan SDK.
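As a minimal CPU illustration of the kind of transform NPP accelerates on the GPU, the following sketch converts an RGB image to grayscale using the BT.601 luma weights with NumPy. This is not NPP itself, just the same class of color-format conversion:

```python
# CPU sketch (NumPy) of a common 2D transform that NPP accelerates on the
# GPU: RGB -> grayscale color-format conversion with BT.601 luma weights.
import numpy as np


def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) uint8 RGB image to an (H, W) uint8 grayscale image."""
    weights = np.array([0.299, 0.587, 0.114])  # BT.601 luma coefficients
    gray = rgb.astype(np.float32) @ weights    # weighted sum over the channel axis
    return np.clip(np.rint(gray), 0, 255).astype(np.uint8)


img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = [255, 255, 255]  # white pixel  -> 255
img[0, 1] = [255, 0, 0]      # pure red     -> round(255 * 0.299) = 76
print(rgb_to_gray(img))
```

The equivalent NPP primitives run the same per-pixel arithmetic on the GPU, which is what makes them suitable for streaming pipelines.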

The Unified Communications X (UCX) framework is an open-source communication framework developed as a collaboration between industry and academia. It provides high-performance point-to-point communication for data-centric applications. The Holoscan SDK uses UCX to send data between fragments in distributed applications. UCX’s high-level protocols attempt to automatically select an optimal transport layer depending on the hardware available. For example, technologies such as TCP, CUDA memory copy, CUDA IPC, and GPUDirect RDMA are supported.

The Holoscan SDK integrates the MatX library, a high-performance C++17 library for numerical computing on NVIDIA GPUs.

The library is accessible in C++ applications through the holoscan::matx interface library. It enables zero-copy data exchange between MatX tensors (matx::tensor) and holoscan::Tensor via the DLPack standard.

To use MatX in a C++ application, link against the holoscan::matx target in CMakeLists.txt:


target_link_libraries(my_application
  PRIVATE
    holoscan::core
    holoscan::matx
)

A new C++ example, matx_basic, is available in the examples/matx/matx_basic directory to demonstrate creating, sharing, and performing GPU-accelerated operations on MatX tensors within a Holoscan pipeline.

© Copyright 2022-2026, NVIDIA. Last updated on Mar 9, 2026