NVIDIA Holoscan SDK ltsb-2.0

SDK Installation

This section covers the installation of the Holoscan SDK development stack, designed for NVIDIA IGX Developer Kits and for x86_64 Linux compute platforms, and intended for development and testing of the SDK.

Note

For Holoscan Developer Kits such as the IGX Orin Developer Kit, an alternative option is the deployment stack, based on OpenEmbedded (Yocto build system) instead of Ubuntu. This is recommended to limit your stack to the software components strictly required to run your Holoscan application. The runtime Board Support Package (BSP) can be optimized with respect to memory usage, speed, security and power requirements.

Set up your developer kit:

Developer Kit   | User Guide | L4T               | GPU Mode
NVIDIA IGX Orin | Guide      | IGX OS 1.1.4-lws2 | iGPU or dGPU

You’ll need the following to use the Holoscan SDK on x86_64:

Authentication

To access LWS2 installations, you will first need an API Key from NGC. Select NGC Catalog as one of the services. Your API key should begin with nvapi-.

If you plan to use only the Docker container, skip to the installation section. If you plan to install the Debian package or Python wheel, you will need to configure buildauthcli and authxsentry:

  1. These tools already come installed on IGX OS LWS2 (iGPU and dGPU). Run the following to install them on x86_64:


    sudo wget <NEED_EXTERNAL_URL>/authxsentry -O /usr/bin/authxsentry
    wget <NEED_EXTERNAL_URL>/buildauthcli-linux-amd64-v2.0.tar.gz -O - | sudo tar -zx -C "/usr/bin/"
    sudo chmod +x /usr/bin/authxsentry
    sudo chmod +x /usr/bin/buildauth

  2. On both IGX and x86_64, run authxsentry with the NGC CLI API key you generated above:


    authxsentry <nvapi-xxx>
    # Successful setup will end with `Login complete.`

Installation

We provide multiple ways to install and run the Holoscan SDK:

To use the Holoscan container, first log in to the NVIDIA container registry:


docker login nvcr.io
# Username: $oauthtoken
# Password: <NGC API key>

You can then pull the image from the NGC LWS2 repository:

  • dGPU (x86_64, IGX Orin dGPU)


    docker pull nvcr.io/nvidia/holoscan-ltsb2:23.10.08-lws2.0.11-dgpu

  • iGPU (IGX Orin iGPU)


    docker pull nvcr.io/nvidia/holoscan-ltsb2:23.10.08-lws2.0.11-igpu

To install the Holoscan LWS2 Debian packages, apt needs to point to the LWS2 repository. This is already configured on IGX OS LWS2 (iGPU and dGPU). Run the following to configure it on x86_64:


sudo sed -i 's|^deb|#deb|g' /etc/apt/sources.list
echo """
deb [arch=amd64 trusted=yes] https://disthub.nvidia.com/artifactory/oss-lws-deb-stable jammy main
deb [arch=amd64 trusted=yes] https://disthub.nvidia.com/artifactory/nvaie-lws2-deb-stable lws lws2
""" | sudo tee /etc/apt/sources.list.d/lws2_amd64.list

Run the following to install the Holoscan SDK Debian package:


sudo apt update
sudo apt install holoscan

Attention

This will not install the dependencies needed for the Torch or ONNXRuntime inference backends. To install them, add the --install-suggests flag to apt install holoscan so that the suggested dependencies are pulled in, and refer to the support matrix below for links to install libtorch and onnxruntime.
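
For example, assuming the LWS2 apt repository above is already configured, installing with the suggested dependencies would look like this:

# Install the Holoscan SDK together with its suggested packages,
# which cover the extra dependencies used by the inference backends.
sudo apt update
sudo apt install --install-suggests holoscan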

To use the Python module included in the Debian package (instead of installing the Python wheel), add the path below to your PYTHONPATH:


export PYTHONPATH="/opt/nvidia/holoscan/python/lib"
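
As a quick sanity check (not part of the official steps), you can confirm that Python picks the module up from that path:

# Should print a path under /opt/nvidia/holoscan/python/lib if PYTHONPATH is set correctly
python3 -c "import holoscan; print(holoscan.__file__)"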

To install the Holoscan LWS2 Python wheel, pip needs to point to the LWS2 repository. This is already configured on IGX OS LWS2 (iGPU and dGPU). Run the following to configure it on x86_64:


echo """ [global] index-url = https://disthub.nvidia.com/artifactory/api/pypi/nvaie-lws2-pypi-stable/simple extra-index-url = https://disthub.nvidia.com/artifactory/api/pypi/oss-lws2-pypi-stable/simple """ | sudo tee /etc/pip.conf

Run the following to install the Holoscan SDK Python wheel:


pip install holoscan

Note

For x86_64, ensure that the CUDA Toolkit is pre-installed.
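
If you are unsure whether the toolkit is present, a simple check (assuming nvcc is on your PATH) is:

# Print the installed CUDA compiler version; CUDA 12 is required (see footnote [1] below)
nvcc --version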

Not sure what to choose?

  • The Holoscan container image on NGC is the safest way to ensure all the dependencies are present with the expected versions (including Torch [8]). It is the simplest way to run the embedded examples, while still allowing you to create your own C++ and Python Holoscan applications on top of it. These benefits come with the standard inconveniences of using Docker, such as more complex run instructions for proper configuration (see the sketch after this list). Also, supporting Rivermax or the CLI requires more work than the other solutions at this time.

  • If you are confident in your ability to manage dependencies on your own in your host environment, the Holoscan Debian package should provide all the capabilities needed to use the Holoscan SDK.

  • If you are not interested in the C++ API but just need to work in Python, you can use the Holoscan Python wheels. While they are the easiest solution to get started, you might need additional work to set up your environment with adequate dependencies, depending on your needs.
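
As an illustration of the container's run instructions, here is a minimal sketch for launching the dGPU image pulled above. The exact flags you need (display forwarding, device mounts, shared memory size) depend on your application, so treat this as a starting point rather than the definitive command:

# Minimal sketch: run the dGPU container interactively with GPU access.
# Additional flags (e.g. display forwarding, video device mounts) are
# typically needed for the rendering and camera examples.
docker run --rm -it --gpus all --ipc=host --net host \
    nvcr.io/nvidia/holoscan-ltsb2:23.10.08-lws2.0.11-dgpu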

                             | Docker Container             | Debian Package                   | Python Wheels
Runtime libraries            | Included                     | Included                         | Included
Python module                | 3.8                          | 3.8                              | 3.8
C++ headers and CMake config | Included                     | Included                         | N/A
Examples (+ source)          | Included                     | Included                         | retrieve from GitHub
Sample datasets              | Included                     | retrieve from Container          | retrieve from Container
CUDA runtime [1]             | Included                     | automatically installed [2]      | require manual installation
NPP support [3]              | Included                     | automatically installed [2]      | require manual installation
TensorRT support [4]         | Included                     | automatically installed [2]      | require manual installation
Vulkan support [5]           | Included                     | automatically installed [2]      | require manual installation
V4L2 support [6]             | Included                     | automatically installed [2]      | require manual installation
Torch support [7]            | Included                     | require manual installation [8]  | require manual installation [8]
ONNX Runtime support [9]     | Included                     | require manual installation [10] | require manual installation [10]
MOFED RoCE support [11]      | Included                     | require manual installation [13] | require manual installation [13]
Rivermax support [11]        | add on top of the image [12] | require manual installation [13] | require manual installation [13]
CLI support                  | needs docker dind with buildx plugin on top of the image | needs docker w/ buildx plugin | needs docker w/ buildx plugin

Need more control over the SDK?

The Holoscan SDK source repository is open-source and provides reference implementations as well as infrastructure for building the SDK yourself.

Attention

We only recommend building the SDK from source if you need to build it with debug symbols or other options not used as part of the published packages. If you want to write your own operator or application, you can use the SDK as a dependency (and contribute to HoloHub). If you need to make other modifications to the SDK, file a feature or bug request.
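
If you do need to build from source, a minimal sketch of getting the repository is below. The GitHub URL points to the public Holoscan SDK repository; the actual build steps (toolchain, CMake options, debug symbols) are documented in that repository's README rather than here:

# Clone the public Holoscan SDK source repository and follow its README
# for build instructions (e.g. enabling debug symbols).
git clone https://github.com/nvidia-holoscan/holoscan-sdk.git
cd holoscan-sdk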


[1] CUDA 12 is required. Already installed on NVIDIA IGX developer kits.

[2] Debian installation on x86_64 requires the latest cuda-keyring package to automatically install all dependencies.

[3] NPP 12 needed for the FormatConverter and BayerDemosaic operators. Already installed on NVIDIA IGX developer kits.

[4] TensorRT 8.6.1+ and cuDNN needed for the Inference operator. Already installed on NVIDIA IGX developer kits.

[5] Vulkan 1.3.204+ loader needed for the HoloViz operator (+ libegl1 for headless rendering). Already installed on NVIDIA IGX developer kits.

[6] V4L2 1.22+ needed for the V4L2 operator. Already installed on NVIDIA IGX developer kits.

[7] Torch support requires LibTorch 2.1+, TorchVision 0.16+, OpenBLAS 0.3.20+, OpenMPI (aarch64 only), MKL 2021.1.1 (x86_64 only), libpng and libjpeg.

[8] To install LibTorch and TorchVision, either build them from source, download our pre-built packages, or copy them from the holoscan container (in /opt). WARNING: not supported in LWS2 - inquire if needed.

[9] ONNXRuntime 1.15.1+ needed for the Inference operator. Note that ONNX models are also supported through the TensorRT backend of the Inference Operator.

[10] To install ONNXRuntime, either build it from source, download our pre-built package with CUDA 12 and TensorRT execution provider support, or copy it from the holoscan container (in /opt/onnxruntime). WARNING: not supported in LWS2 - inquire if needed.

[11] Tested with MOFED 23.10 and Rivermax SDK 1.40.11.

[12] Make sure to install the Rivermax license file on the host and mount it in the container.

[13] Rivermax SDK and OFED drivers and libraries are installed on NVIDIA developer kits with SDKM through the Rivermax SDK program.
