Docker Containers#

DeepStream 7.1 provides Docker containers for dGPU on both x86 and ARM platforms (SBSA, GH100, etc.) and for Jetson platforms. These containers provide a convenient, out-of-the-box way to deploy DeepStream applications by packaging all associated dependencies within the container. The associated Docker images are hosted on the NVIDIA container registry in the NGC web portal at https://ngc.nvidia.com. They use the nvidia-docker package, which enables access to the required GPU resources from containers. This section describes the features supported by the DeepStream Docker containers for dGPU on x86 and ARM, and for Jetson platforms.

Note

The DeepStream 7.1 containers for dGPU on x86 and ARM (SBSA) and Jetson are distinct, so you must get the right image for your platform.

Note

With DeepStream 7.1, the DeepStream docker containers do not package the libraries necessary for certain multimedia operations such as audio data parsing, CPU decode, and CPU encode. This change can affect processing of certain video streams/files, such as mp4 files that include an audio track. Run the following script inside the docker container to install the additional packages (e.g. gstreamer1.0-libav, gstreamer1.0-plugins-good, gstreamer1.0-plugins-bad, gstreamer1.0-plugins-ugly, as required) that may be necessary to use all of the DeepStream SDK features: /opt/nvidia/deepstream/deepstream/user_additional_install.sh
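
For example, a minimal sketch of running the script inside a running container, or installing only the GStreamer plugin packages named above:

# Inside the running DeepStream container
/opt/nvidia/deepstream/deepstream/user_additional_install.sh
# Or install individual plugin sets as required
apt-get update
apt-get install -y gstreamer1.0-libav gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly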

Note

  • The script prepare_classification_test_video.sh located at /opt/nvidia/deepstream/deepstream/samples requires ffmpeg to be installed. Some of the low-level codec libraries need to be re-installed along with ffmpeg.

  • Use the following command to install/re-install ffmpeg: apt-get install --reinstall libflac8 libmp3lame0 libxvidcore4 ffmpeg
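
As a sketch, the reinstall and the sample script can be run back to back inside the container; both paths and package names are taken from this note:

# Reinstall ffmpeg along with the low-level codec libraries
apt-get install --reinstall libflac8 libmp3lame0 libxvidcore4 ffmpeg
# Then run the sample script
cd /opt/nvidia/deepstream/deepstream/samples
./prepare_classification_test_video.sh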

Prerequisites#

  1. Install docker-ce by following the official instructions.

    Once you have installed docker-ce, follow the post-installation steps to ensure that docker can be run without sudo (a consolidated command sketch follows this list).

  2. Install nvidia-container-toolkit by following the install-guide.

  3. Get an NGC account and API key:

    1. Go to NGC and search for DeepStream in the Containers tab. This message is displayed: “Sign in to access the PULL feature of this repository”.

    2. Enter your Email address and click Next, or click Create an Account.

    3. Choose your organization when prompted for Organization/Team.

    4. Click Sign In.

  4. Log in to the NGC docker registry (nvcr.io) using the command docker login nvcr.io and enter the following credentials:

    a. Username: "$oauthtoken"
    b. Password: "YOUR_NGC_API_KEY"
    

    where YOUR_NGC_API_KEY corresponds to the key you generated in step 3.
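
A consolidated sketch of steps 1, 2, and 4 on the host follows; the group and nvidia-ctk commands are taken from the official Docker post-installation and NVIDIA Container Toolkit guides linked above, so treat them as illustrative and prefer those guides if they differ:

# Step 1 (post-installation): allow running docker without sudo (from the Docker post-install guide)
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
# Step 2: register the NVIDIA runtime with docker after installing nvidia-container-toolkit (from the install guide)
$ sudo nvidia-ctk runtime configure --runtime=docker
$ sudo systemctl restart docker
# Step 4: log in to nvcr.io; use "$oauthtoken" as the username and your NGC API key as the password
$ docker login nvcr.io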

Sample commands to run a docker container:

# Pull the required docker image. Refer to the Docker Containers tables below to get the container name.
$ docker pull <required docker container name>
# Steps to run the docker container
$ export DISPLAY=:0
$ xhost +
$ docker run -it --rm --net=host --gpus all -e DISPLAY=$DISPLAY --device /dev/snd -v /tmp/.X11-unix/:/tmp/.X11-unix <required docker container name>
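
For example, using the Triton Inference Server image for dGPU listed in the table below (substitute whichever container tag you need):

$ docker pull nvcr.io/nvidia/deepstream:7.1-triton-multiarch
$ export DISPLAY=:0
$ xhost +
$ docker run -it --rm --net=host --gpus all -e DISPLAY=$DISPLAY --device /dev/snd -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream:7.1-triton-multiarch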

A Docker Container for dGPU#

The Containers page in the NGC web portal gives instructions for pulling and running the container, along with a description of its contents. The dGPU container is called deepstream and the Jetson container is called deepstream-l4t. Unlike the container in DeepStream 3.0, the dGPU DeepStream 7.1 container supports DeepStream application development within the container. It contains the same build tools and development libraries as the DeepStream 7.1 SDK. In a typical scenario, you build, execute and debug a DeepStream application within the DeepStream container. Once your application is ready, you can use the DeepStream 7.1 container as a base image to create your own Docker container holding your application files (binaries, libraries, models, configuration files, etc.). Here is an example snippet of a Dockerfile for creating your own Docker container:

FROM nvcr.io/nvidia/deepstream:7.1-<container type>
COPY myapp  /root/apps/myapp
# To get video driver libraries at runtime (libnvidia-encode.so/libnvcuvid.so)
ENV NVIDIA_DRIVER_CAPABILITIES $NVIDIA_DRIVER_CAPABILITIES,video

This Dockerfile copies your application (from the directory myapp) into the container (at the path /root/apps/myapp). Note that you must ensure the DeepStream 7.1 image location from NGC is accurate.
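
As a follow-up, a minimal sketch of building and running the resulting image on a dGPU host; the image name my-deepstream-app is illustrative:

# Build the image from the directory containing the Dockerfile and myapp
$ docker build -t my-deepstream-app .
# Run it with GPU access, reusing the run options shown earlier
$ docker run -it --rm --net=host --gpus all my-deepstream-app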

The table below lists the docker containers for dGPU released with DeepStream 7.1:

Docker Containers for dGPU#

| Container | Container pull command |
|---|---|
| Triton devel docker (contains the entire SDK along with a development environment for building DeepStream applications and Graph Composer) | docker pull nvcr.io/nvidia/deepstream:7.1-gc-triton-devel |
| Triton Inference Server docker (contains Triton Inference Server and its dependencies, along with a development environment for building DeepStream applications) | docker pull nvcr.io/nvidia/deepstream:7.1-triton-multiarch |
| DeepStream samples docker (contains the runtime libraries, GStreamer plugins, reference applications, and sample streams, models and configs) | docker pull nvcr.io/nvidia/deepstream:7.1-samples-multiarch |

See the DeepStream 7.1 Release Notes for information regarding nvcr.io authentication and more.

Note

See the dGPU container on NGC for more details and instructions to run the dGPU containers.

Suggested Setup for Video Subsystem on x86 dGPU docker#

Note

This does not affect RTX-series dGPUs that use the NVIDIA 560-series driver.

DeepStream 7.1 supports Cuda-12.6 in the compute stack and in the docker containers by default. Data Center GPUs are currently supported only by driver 535.183.06, which ships with the Cuda-12.2 driver by default.

Even though CUDA supports forward compatibility with newer runtime versions like Cuda-12.6, other components such as the Cuda-GL interop, which the display sink requires, do not support forward compatibility and might not work as expected. Hence, for these components to work, the Cuda-12.2 toolkit must also be installed alongside the default Cuda-12.6 runtime inside the docker container.

For users who need to use/enable display output, the following steps are suggested inside the docker container to implement this workaround (a consolidated command sketch follows the list):

  1. Start the docker container as shown in the steps above.

  2. Install cuda-toolkit-12-2 by following these instructions.

  3. PLEASE NOTE: In the Cuda-12 installation instructions linked above, replace sudo apt-get -y install cuda with sudo apt-get -y install cuda-toolkit-12-2.

  4. Change the default CUDA version to point to Cuda-12.2 inside the docker container using update-alternatives: update-alternatives --set cuda /usr/local/cuda-12.2

  5. To check which version of CUDA is currently in use inside the docker container, run: update-alternatives --display cuda
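
Putting the steps together, a minimal sketch of the commands run inside the container; the toolkit installation assumes the CUDA apt repository has been set up as described in the instructions linked in step 2:

# Install the Cuda-12.2 toolkit (steps 2 and 3)
apt-get -y install cuda-toolkit-12-2
# Point the default CUDA at Cuda-12.2 (step 4)
update-alternatives --set cuda /usr/local/cuda-12.2
# Verify which CUDA version is currently selected (step 5)
update-alternatives --display cuda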

Note

A similar limitation also exists for the dGPU 550 driver, which comes with Cuda-12.4 by default, when using Cuda-12.6 based docker containers that need to use/enable display.
For users who need to use/enable display on systems using the dGPU 550 driver, enable cuda-toolkit-12-4 in addition to Cuda-12.6 inside the container by following similar steps as above.
Replace step 2 with the cuda-toolkit-12-4 installation instructions found here.

A Docker Container for Jetson#

As of JetPack release 4.2.1, NVIDIA Container Runtime for Jetson has been added, enabling you to run GPU-enabled containers on Jetson devices. Using this capability, DeepStream 7.1 can be run inside containers on Jetson devices using Docker images on NGC. Pull the container and execute it according to the instructions on the NGC Containers page. The DeepStream container no longer expects CUDA or TensorRT to be installed on the Jetson device, because they are included within the container image. Make sure that the BSP is installed using JetPack, and that the nvidia-container tools are installed from JetPack or the apt server (see the instructions below), on your Jetson prior to launching the DeepStream container.

The Jetson Docker containers are for deployment only. They do not support DeepStream software development within a container. You can build applications natively on the Jetson target and create containers for them by adding the binaries to your docker images. Alternatively, you can generate Jetson containers from your workstation using the instructions in the Building Jetson Containers on an x86 Workstation section of the NVIDIA Container Runtime for Jetson documentation.

The table below lists the docker containers for Jetson released with DeepStream 7.1:

Docker Containers for Jetson#

| Container | Container pull command |
|---|---|
| DeepStream samples docker (contains the runtime libraries, GStreamer plugins, reference applications, and sample streams, models and configs) | docker pull nvcr.io/nvidia/deepstream:7.1-samples-multiarch |
| DeepStream Triton docker (contains the contents of the samples docker plus devel libraries and Triton Inference Server backends) | docker pull nvcr.io/nvidia/deepstream:7.1-triton-multiarch |
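
For reference, a minimal sketch of pulling and running the Jetson Triton image listed above; the --runtime nvidia flag is an assumption based on the NVIDIA Container Runtime for Jetson, so check the NGC Containers page for the exact run command:

$ docker pull nvcr.io/nvidia/deepstream:7.1-triton-multiarch
$ export DISPLAY=:0
$ xhost +
# --runtime nvidia (assumed here) exposes the Jetson GPU to the container via the NVIDIA Container Runtime
$ docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream:7.1-triton-multiarch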

Note

For the Jetson Triton container, the error message “Failed to detect NVIDIA driver version” is printed when running the docker container. No impact on functionality has been observed so far.

See the DeepStream 7.1 Release Notes for information regarding nvcr.io authentication and more.

Note

See the Jetson container on NGC for more details and instructions to run the Jetson containers.

A Docker Container for dGPU on ARM (IGX/dGPU, GH100, GH200, SBSA)#

The Containers page in the NGC web portal gives instructions for pulling and running the container, along with a description of its contents. The dGPU on ARM container is called deepstream:<version>-triton-arm-sbsa and the Jetson container is called deepstream-l4t. Unlike the container in DeepStream 3.0, the dGPU DeepStream 7.1 container supports DeepStream application development within the container. It contains the same build tools and development libraries as the DeepStream 7.1 SDK. In a typical scenario, you build, execute and debug a DeepStream application within the DeepStream container. Once your application is ready, you can use the DeepStream 7.1 container as a base image to create your own Docker container holding your application files (binaries, libraries, models, configuration files, etc.). Here is an example snippet of a Dockerfile for creating your own Docker container:

FROM nvcr.io/nvidia/deepstream:7.1-<container type>
COPY myapp  /root/apps/myapp
# To get video driver libraries at runtime (libnvidia-encode.so/libnvcuvid.so)
ENV NVIDIA_DRIVER_CAPABILITIES $NVIDIA_DRIVER_CAPABILITIES,video

This Dockerfile copies your application (from the directory myapp) into the container (at the path /root/apps/myapp). Note that you must ensure the DeepStream 7.1 image location from NGC is accurate.

The table below lists the docker containers for dGPU on ARM released with DeepStream 7.1:

Docker Containers for dGPU on ARM#

| Container | Container pull command |
|---|---|
| Triton Inference Server docker (contains Triton Inference Server and its dependencies, along with a development environment for building DeepStream applications) | docker pull nvcr.io/nvidia/deepstream:7.1-triton-arm-sbsa |

See the DeepStream 7.1 Release Notes for information regarding nvcr.io authentication and more.

Note

See the dGPU on ARM container on NGC for more details and instructions to run the dGPU on ARM (SBSA) containers.

Known Limitation with Video Subsystem and Workaround#

With the DeepStream 7.1 ARM SBSA docker container, video display will not work by default on dGPU on ARM systems. Please see the Known Limitation with Video Subsystem and Workaround section for details and the workaround.

Creating custom DeepStream dockers for dGPU or Jetson using DeepStreamSDK package#

Note

See the DeepStream Dockerfile Guide on GitHub for more details.