Docker Containers

DeepStream 6.2 provides Docker containers for both dGPU and Jetson platforms. These containers provide a convenient, out-of-the-box way to deploy DeepStream applications by packaging all associated dependencies within the container. The associated Docker images are hosted on the NVIDIA container registry in the NGC web portal at https://ngc.nvidia.com. They use the nvidia-docker package, which enables access to the required GPU resources from containers. This section describes the features supported by the DeepStream Docker container for the dGPU and Jetson platforms.

Note

The DeepStream 6.2 containers for dGPU and Jetson are distinct, so you must get the right image for your platform.

Note

With DS 6.2, DeepStream Docker containers do not package the libraries needed for certain multimedia operations such as audio data parsing, CPU decode, and CPU encode. This change can affect the processing of certain video streams/files, such as mp4 files that include an audio track. To install the additional packages (e.g. gstreamer1.0-libav, gstreamer1.0-plugins-good, gstreamer1.0-plugins-bad, gstreamer1.0-plugins-ugly, as required) that may be necessary to use all DeepStream SDK features, run the following script inside the Docker image: /opt/nvidia/deepstream/deepstream/user_additional_install.sh
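For example, the script can be run from an interactive shell inside the container (the image tag below is illustrative; use the container you actually pulled for your platform):

```shell
# Start an interactive shell in the container (illustrative tag)
docker run -it --rm nvcr.io/nvidia/deepstream:6.2-base bash

# Then, inside the container, install the extra multimedia packages:
/opt/nvidia/deepstream/deepstream/user_additional_install.sh
```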

Prerequisites

  1. Install docker-ce by following the official instructions.

    Once you have installed docker-ce, follow the post-installation steps so that docker can be run without sudo.

  2. Install nvidia-container-toolkit by following the install-guide.

  3. Get an NGC account and API key:

    1. Go to NGC and search for DeepStream in the Containers tab. This message is displayed: “Sign in to access the PULL feature of this repository”.

    2. Enter your Email address and click Next, or click Create an Account.

    3. Choose your organization when prompted for Organization/Team.

    4. Click Sign In.

  4. Log in to the NGC docker registry (nvcr.io) using the command docker login nvcr.io and enter the following credentials:

    a. Username: "$oauthtoken"
    b. Password: "YOUR_NGC_API_KEY"


    where YOUR_NGC_API_KEY is the API key you generated in step 3.
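The login step above can also be scripted so the key is not typed interactively (NGC_API_KEY is a hypothetical variable name holding the key generated in step 3):

```shell
# Non-interactive NGC login; the username must be the literal string $oauthtoken
export NGC_API_KEY="YOUR_NGC_API_KEY"
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
```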

Sample commands to run a docker container

# Pull the required docker image.  Refer to the Docker Containers table for the container name.
$ docker pull <required docker container name>
# Steps to run the docker image
$ export DISPLAY=:0
$ xhost +
$ docker run -it --rm --net=host --gpus all -e DISPLAY=$DISPLAY --device /dev/snd -v /tmp/.X11-unix/:/tmp/.X11-unix <required docker container name>
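On a headless machine (no attached display), the X11-related options can be dropped; configure your DeepStream pipeline to render to fakesink, a file sink, or an RTSP sink instead of a window:

```shell
# Headless variant of the run command above (no DISPLAY/X11 forwarding)
docker run -it --rm --net=host --gpus all <required docker container name>
```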

A Docker Container for dGPU

The Containers page in the NGC web portal gives instructions for pulling and running the container, along with a description of its contents. The dGPU container is called deepstream and the Jetson container is called deepstream-l4t. Unlike the container in DeepStream 3.0, the dGPU DeepStream 6.2 container supports DeepStream application development within the container: it contains the same build tools and development libraries as the DeepStream 6.2 SDK. In a typical scenario, you build, execute, and debug a DeepStream application within the DeepStream container. Once your application is ready, you can use the DeepStream 6.2 container as a base image to create your own Docker container holding your application files (binaries, libraries, models, configuration files, etc.). Here is an example snippet of a Dockerfile for creating your own Docker container:

# Replace with required container type e.g. base, devel etc in the following line
FROM nvcr.io/nvidia/deepstream:6.2-<container type>
COPY myapp  /root/apps/myapp
# To get video driver libraries at runtime (libnvidia-encode.so/libnvcuvid.so)
ENV NVIDIA_DRIVER_CAPABILITIES $NVIDIA_DRIVER_CAPABILITIES,video

This Dockerfile copies your application (from the directory myapp) into the container (under /root/apps). Ensure that the DeepStream 6.2 image name from NGC is accurate.
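A sketch of building and running the resulting image (the tag mydsapp and the application path are illustrative):

```shell
# Build the custom image from the Dockerfile above, then run the packaged app
docker build -t mydsapp:latest .
docker run -it --rm --net=host --gpus all mydsapp:latest /root/apps/myapp
```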

Note

The current release provides Triton dockers for NVAIE customers only.

The table below lists the docker containers for dGPU released with DeepStream 6.2:

Docker Containers for dGPU

  Container: Triton Inference Server docker, with Triton Inference Server and dependencies installed along with a development environment for building DeepStream applications
  Pull command: docker pull nvcr.io/nvaie/deepstream-3-1:6.2-triton_nvaie [for NVAIE]

See the DeepStream 6.2 Release Notes for information regarding nvcr.io authentication and more.

Note

See the dGPU container on NGC for more details and instructions to run the dGPU containers.

A Docker Container for Jetson

As of JetPack release 4.2.1, NVIDIA Container Runtime for Jetson has been added, enabling you to run GPU-enabled containers on Jetson devices. Using this capability, DeepStream 6.2 can be run inside containers on Jetson devices using Docker images hosted on NGC. Pull the container and execute it according to the instructions on the NGC Containers page.

The DeepStream container no longer expects CUDA and TensorRT to be installed on the Jetson device, because they are included within the container image. Before launching the DeepStream container, make sure that the BSP is installed using JetPack and that the nvidia-container tools are installed from JetPack or the apt server (see instructions below).

The Jetson Docker containers are for deployment only; they do not support DeepStream software development within a container. You can build applications natively on the Jetson target and create containers for them by adding the binaries to your Docker images. Alternatively, you can generate Jetson containers from your workstation using the instructions in the Building Jetson Containers on an x86 Workstation section of the NVIDIA Container Runtime for Jetson documentation.

The table below lists the docker containers for Jetson released with DeepStream 6.2:

Docker Containers for Jetson

  Container: Base docker (contains only the runtime libraries and GStreamer plugins; can be used as a base to build custom dockers for DeepStream applications)
  Pull command: docker pull nvcr.io/nvidia/deepstream-l4t:6.2-base

  Container: DeepStream IoT docker (deepstream-test5-app installed; all other reference applications removed)
  Pull command: docker pull nvcr.io/nvidia/deepstream-l4t:6.2-iot

  Container: DeepStream samples docker (contains the runtime libraries, GStreamer plugins, reference applications, and sample streams, models, and configs)
  Pull command: docker pull nvcr.io/nvidia/deepstream-l4t:6.2-samples

  Container: DeepStream Triton docker (contains the contents of the samples docker plus devel libraries and Triton Inference Server backends)
  Pull command: docker pull nvcr.io/nvidia/deepstream-l4t:6.2-triton

See the DeepStream 6.2 Release Notes for information regarding nvcr.io authentication and more.

Note

See the Jetson container on NGC for more details and instructions to run the Jetson containers.
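As a sketch, pulling and launching the samples container on a Jetson device looks like this (on L4T the NVIDIA runtime is selected with --runtime nvidia rather than --gpus):

```shell
docker pull nvcr.io/nvidia/deepstream-l4t:6.2-samples
docker run -it --rm --net=host --runtime nvidia \
    -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix \
    nvcr.io/nvidia/deepstream-l4t:6.2-samples
```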

Creating custom DeepStream docker for dGPU using DeepStreamSDK package

Note

This section is not applicable for NVAIE customers at this time.

The following is a sample Dockerfile for creating a custom DeepStream docker image for dGPU using either the DeepStream Debian or tar package.

Note

The Dockerfile below might install additional packages that are not present in the public DeepStream docker image releases on NGC.

FROM nvcr.io/nvidia/cuda:11.8.0-devel-ubuntu20.04
# Set TENSORRT_VERSION, example: 8.5.2-1+cuda11.8
ARG TENSORRT_VERSION
# Set CUDNN_VERSION, example: 8.7.0.84-1+cuda11.8
ARG CUDNN_VERSION


# Add open GL libraries
RUN apt-get update && \
        DEBIAN_FRONTEND=noninteractive  apt-get install -y --no-install-recommends \
        pkg-config \
        libglvnd-dev \
        libgl1-mesa-dev \
        libegl1-mesa-dev \
        libgles2-mesa-dev

RUN apt-get update && \
        DEBIAN_FRONTEND=noninteractive  apt-get install -y \
        wget \
        libyaml-cpp-dev \
        gnutls-bin

RUN apt-get update && \
        DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        linux-libc-dev \
        libglew2.1 libssl1.1 libjpeg8 libjson-glib-1.0-0 \
        gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-tools gstreamer1.0-libav \
        gstreamer1.0-alsa \
        libcurl4 \
        libuuid1 \
        libjansson4 \
        libjansson-dev \
        librabbitmq4 \
        libgles2-mesa \
        libgstrtspserver-1.0-0 \
        libv4l-dev \
        gdb bash-completion libboost-dev \
        uuid-dev libgstrtspserver-1.0-0 libgstrtspserver-1.0-0-dbg libgstrtspserver-1.0-dev \
        libgstreamer1.0-dev \
        libgstreamer-plugins-base1.0-dev \
        libglew-dev \
        libssl-dev \
        libopencv-dev \
        freeglut3-dev \
        libjpeg-dev \
        libcurl4-gnutls-dev \
        libjson-glib-dev \
        libboost-dev \
        librabbitmq-dev \
        libgles2-mesa-dev \
        pkg-config \
        libxau-dev \
        libxdmcp-dev \
        libxcb1-dev \
        libxext-dev \
        libx11-dev \
        libnss3 \
        linux-libc-dev \
        git \
        wget \
        gnutls-bin \
        sshfs \
        python3-distutils \
        python3-apt \
        python \
        rsyslog \
        vim  rsync \
        gstreamer1.0-rtsp \
        libcudnn8=${CUDNN_VERSION} \
        libcudnn8-dev=${CUDNN_VERSION} \
        libnvinfer8=${TENSORRT_VERSION} \
        libnvinfer-dev=${TENSORRT_VERSION} \
        libnvparsers8=${TENSORRT_VERSION} \
        libnvparsers-dev=${TENSORRT_VERSION} \
        libnvonnxparsers8=${TENSORRT_VERSION} \
        libnvonnxparsers-dev=${TENSORRT_VERSION} \
        libnvinfer-plugin8=${TENSORRT_VERSION} \
        libnvinfer-plugin-dev=${TENSORRT_VERSION} \
        python3-libnvinfer=${TENSORRT_VERSION} \
        python3-libnvinfer-dev=${TENSORRT_VERSION} && \
        rm -rf /var/lib/apt/lists/* && \
        apt autoremove

RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
    libx11-xcb-dev \
    libxkbcommon-dev \
    libwayland-dev \
    libxrandr-dev \
    libegl1-mesa-dev && \
    rm -rf /var/lib/apt/lists/*

# Install DeepStreamSDK using Debian package. DeepStream tar package can also be installed in a similar manner
ADD deepstream-6.2_6.2.0-1_amd64.deb /root

RUN apt-get update && \
      DEBIAN_FRONTEND=noninteractive  apt-get install -y --no-install-recommends \
      /root/deepstream-6.2_6.2.0-1_amd64.deb

WORKDIR /opt/nvidia/deepstream/deepstream

RUN ln -s /usr/lib/x86_64-linux-gnu/libnvcuvid.so.1 /usr/lib/x86_64-linux-gnu/libnvcuvid.so
RUN ln -s /usr/lib/x86_64-linux-gnu/libnvidia-encode.so.1 /usr/lib/x86_64-linux-gnu/libnvidia-encode.so

# To get video driver libraries at runtime (libnvidia-encode.so/libnvcuvid.so)
ENV NVIDIA_DRIVER_CAPABILITIES $NVIDIA_DRIVER_CAPABILITIES,video,compute,graphics,utility

Build docker using the following command:

docker build -t deepstream:dgpu --build-arg TENSORRT_VERSION="8.5.2-1+cuda11.8" --build-arg CUDNN_VERSION="8.7.0.84-1+cuda11.8" .

Note

Ensure the Dockerfile and the DeepStream package are present in the directory used to build the docker image.
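As an optional sanity check after the build, you can confirm that DeepStream is installed in the new image (the tag matches the build command above):

```shell
# Print the DeepStream and dependency versions from inside the new image
docker run --rm --gpus all deepstream:dgpu deepstream-app --version-all
```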

Creating custom DeepStream docker for Jetson using DeepStreamSDK package

Note

This section is not applicable for NVAIE customers at this time.

The following is a sample Dockerfile for creating a custom DeepStream docker image for Jetson using the tar package. Note: the Dockerfile below might install additional packages that are not present in the public DeepStream docker image releases on NGC.

# Use L4T tensorrt docker listed on https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-tensorrt/tags
# Use r8.5.2.2 for DS 6.2.0
FROM nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-runtime

#Install vpi-dev and vpi-lib
RUN apt-get update && \
        DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        libnvvpi2 vpi2-dev vpi2-samples && \
        rm -rf /var/lib/apt/lists/* && \
        apt autoremove

# Install dependencies
RUN apt-get update && \
        DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        rsyslog git \
        tzdata \
        libgstrtspserver-1.0-0 \
        libjansson4 \
        libglib2.0-0 \
        libjson-glib-1.0-0 \
        librabbitmq4 \
        gstreamer1.0-rtsp \
        libyaml-cpp-dev \
        libyaml-cpp0.6 \
        libcurl4-openssl-dev \
        ca-certificates && \
        rm -rf /var/lib/apt/lists/* && \
        apt autoremove


# Add missing CUDA symlinks in the base docker
RUN ln -s /usr/local/cuda-11.4/targets/aarch64-linux/lib/libcufft.so.10 /usr/local/cuda-11.4/targets/aarch64-linux/lib/libcufft.so
RUN ln -s /usr/local/cuda-11.4/targets/aarch64-linux/lib/libcublas.so.11 /usr/local/cuda-11.4/targets/aarch64-linux/lib/libcublas.so
RUN ln -s /usr/lib/aarch64-linux-gnu/libcudnn.so.8 /usr/lib/aarch64-linux-gnu/libcudnn.so
RUN ln -s /usr/local/cuda-11.4/lib64/libcudart.so.11.0 /usr/local/cuda-11.4/lib64/libcudart.so
# Nvinfer libs:
RUN ln -s /usr/lib/aarch64-linux-gnu/libnvinfer.so.8 /usr/lib/aarch64-linux-gnu/libnvinfer.so
RUN ln -s /usr/lib/aarch64-linux-gnu/libnvparsers.so.8 /usr/lib/aarch64-linux-gnu/libnvparsers.so
RUN ln -s /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
RUN ln -s /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.8 /usr/lib/aarch64-linux-gnu/libnvonnxparser.so
RUN ln -s /usr/lib/aarch64-linux-gnu/libnvcaffe_parser.so.8 /usr/lib/aarch64-linux-gnu/libnvcaffe_parser.so
# NPP libs:
RUN ln -s /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppc.so.11 /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppc.so
RUN ln -s /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppial.so.11 /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppial.so
RUN ln -s /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppicc.so.11 /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppicc.so
RUN ln -s /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppidei.so.11 /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppidei.so
RUN ln -s /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppif.so.11 /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppif.so
RUN ln -s /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppig.so.11 /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppig.so
RUN ln -s /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppim.so.11 /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppim.so
RUN ln -s /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppist.so.11 /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppist.so
RUN ln -s /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppisu.so.11 /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppisu.so
RUN ln -s /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppitc.so.11 /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnppitc.so
RUN ln -s /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnpps.so.11 /usr/local/cuda-11.4/targets/aarch64-linux/lib/libnpps.so


RUN ldconfig

# Install DeepStreamSDK using tar package.
ENV DS_REL_PKG deepstream_sdk_v6.2.0_jetson.tbz2

COPY "${DS_REL_PKG}"  \
/

RUN DS_REL_PKG_DIR="${DS_REL_PKG%.tbz2}" && \
cd / && \
tar -xvf "${DS_REL_PKG}" -C / && \
cd /opt/nvidia/deepstream/deepstream && \
./install.sh && \
cd / && \
rm -rf "/${DS_REL_PKG}"

RUN ldconfig

CMD ["/bin/bash"]
WORKDIR /opt/nvidia/deepstream/deepstream

ENV LD_LIBRARY_PATH /usr/local/cuda-11.4/lib64
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES all

ENV LD_PRELOAD /usr/lib/aarch64-linux-gnu/libgomp.so.1:$LD_PRELOAD

Build docker using the following command:

docker build -t deepstream:jetson .

Note

Ensure the Dockerfile and the DeepStream package are present in the directory used to build the docker image. Also note that the Jetson docker image can be created using the DeepStream tar package only, not the Debian package.
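As an optional sanity check, confirm that the SDK was unpacked into the image (the tag matches the build command above):

```shell
# List the installed SDK directory inside the new image
docker run --rm --runtime nvidia deepstream:jetson ls /opt/nvidia/deepstream/deepstream
```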


Note

This section is not applicable for NVAIE customers at this time.

Since JetPack 5.0.2 GA, NVIDIA Container Runtime no longer mounts user-level libraries like CUDA, cuDNN, and TensorRT from the host. These are instead installed inside the containers.

What does this mean for DS users?

  1. New DS dockers therefore take up roughly double the space of previous Jetson dockers.

  2. DS 6.2 dockers run on JetPack 5.1 GA only.

  3. Older DS dockers are not compatible with JetPack 5.1 GA.
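One way to confirm the host L4T version before launching a DS 6.2 container (JetPack 5.1 GA corresponds to L4T r35.2.1):

```shell
# Print the L4T release string on the Jetson host
cat /etc/nv_tegra_release
```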