Container Builder
===================

Container Builder (CB) is used to build docker images for AI Application graphs created using Composer. In addition to building docker images, it can also push the final image to the cloud for deployment.

Container Builder interacts with :doc:`GraphComposer_Registry` to:

- Download extensions and other related files to your local system.
- Copy other required files specified in the config file to generate an intermediate work folder and an optimized dockerfile.
- Convert archive/package dependencies and instructions into docker instructions and try to build a minimal-sized local image. For optimization, you can easily configure Container Builder to support multi-stage docker builds.

.. image:: /content/Container_builder_overview.png
   :align: center
   :alt: Container Builder

Container Builder supports graph installation and container image building on x86 Ubuntu systems. It can also build arm64 images from x86_64 platforms - to do this, you will need to install QEMU and ``binutils``.

Prerequisites
---------------

1. Install the right docker version from https://docs.docker.com/engine/install/ubuntu/

2. Log in to any server from which you need to pull or push images. Run: ::

      $ docker login server:port

   If you need NGC images and resources, follow https://ngc.nvidia.com/setup/api-key to apply for permission and get an `API_KEY` token. Then run: ::

      $ docker login nvcr.io

3. Some features (e.g. squash) might need docker experimental support. To enable it, update ``/etc/docker/daemon.json`` and add: ::

      {
          "experimental": true
      }

   Then restart docker by running: ::

      $ sudo systemctl restart docker

4. If you want to build ARM docker images from an x86_64 platform, you need to install `QEMU` and `binfmt`. An OS restart might be needed. ::

      $ sudo apt-get install qemu binfmt-support qemu-user-static
      $ docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

   To verify that it is working, run: ::

      $ docker run --rm -t arm64v8/ubuntu uname -m

5. Install the Graph Composer package. Make sure the ``container_builder`` executable binary is installed.

Container Builder Features
----------------------------

The image building is based on docker build. Container Builder provides different stage models to build the image: there are `compile_stage` and `clean_stage` models for users to select. Some features are applicable to one stage only. For more details, see the feature table.

.. csv-table:: Container Builder features
   :file: ../text/tables/container_builder_features.csv
   :widths: 40, 40, 20

Container Builder Tool Usage
------------------------------

The CB (container_builder) tool has very few input arguments. The config file collects all user settings as a YAML format text file. Briefly, to generate a container, users need to update the config file and run the command line: ::

    $ container_builder -c config.yaml -d graph_target.yaml

See more details of the config settings in the Configuration Specification below. The graph target file is the same target file used by the registry during graph install; see the registry CLI graph install documentation for a sample file.

The default log level is INFO and the output stream displays on screen. The log-level, log-file, and other arguments are used for debugging. For more details, refer to the help option of the following command: ::

    $ container_builder -h
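The config file layout is described in detail in the Configuration sections below. As a preview, a minimal sketch contains one dockerfile stage document and one main control document, separated by the YAML document delimiter ``---``. The stage name, base image tag, and output image name below are placeholders: ::

    # dockerfile stage section (one or more documents)
    ---
    unique_stage: final_image                      # required, name must be unique
    base_image: "nvcr.io/nvidia/deepstream:x.x-x"  # required (placeholder tag)
    stage_model: clean_stage                       # the final stage must be clean_stage

    # main control section (exactly one per config file)
    ---
    container_builder: main                        # required, any string is ok for name
    docker_build:
      image_name: my_app:nvgcb_test                # output image name (placeholder)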
Run Container Builder
-----------------------

The following is a basic set of steps to build a container using an existing Container Builder configuration file and then run the resulting container.

1. Update the config file to start the build: open ``/opt/nvidia/deepstream/deepstream-6.0/reference_graphs/deepstream-test1/ds_test1_container_builder_dgpu.yaml``

2. Specify the right base image with the correct DeepStream SDK version for the ``graph_files``. If no base image is specified, Container Builder attempts to auto-select one from the pool of predefined base images in ``/opt/nvidia/graph-composer/container_builder.yaml``; the container which most closely matches the graph target is selected: ::

      base_image: "nvcr.io/nvidia/deepstream:x.x-x"

3. Specify the output image name in the ``docker_build`` section: ::

      docker_build:
        image_name: deepstream_test1_dgpu:nvgcb_test

4. Run the Container Builder tool to build the image: ::

      $ container_builder -c ds_test1_container_builder_dgpu.yaml -d /opt/nvidia/graph-composer/config/target_x86_64_cuda_11_4.yaml

5. Verify the image and the graph in the container, using the image name from the config file: ::

      $ docker run --gpus all -v /tmp/.X11-unix:/tmp/.X11-unix deepstream_test1_dgpu:nvgcb_test
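If the graph renders video, the container usually also needs permission to use the host X server and a ``DISPLAY`` value. The lines below are generic docker/X11 usage, not Container Builder specific behavior: ::

    $ xhost +local:
    $ docker run --gpus all --rm -it -e DISPLAY=$DISPLAY \
          -v /tmp/.X11-unix:/tmp/.X11-unix deepstream_test1_dgpu:nvgcb_test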
Container Builder Configuration
------------------------------------

The input config file for Container Builder follows the YAML 1.2 format rules (https://yaml.org/spec/1.2/spec.html). There are 2 major YAML document sections in the configuration settings:

1. Container builder main control section - Here, users specify graph installation options, build/push options, and other host-side control options. Each config file can have only one control section, identified by the key field ``container_builder: name``.

2. Container dockerfile stage section - These sections are converted into dockerfiles. Users can specify multiple stage sections. There are 2 model templates for the stages:

   1. clean_stage model: This is the default model if not specified. The output container image must have a clean_stage section as the final stage. Users should keep the final stage as clean as possible.

   2. compile_stage model: It is used to do extra work such as building binaries from source code and installing compile tools. It should be an intermediate stage; users can specify a clean_stage that copies the required binaries out of the compile_stage.

.. note:: You must store private information safely when building docker images with Container Builder. See the docker reference https://docs.docker.com/engine/reference/builder/ to learn how to avoid exposing critical layers to the public. MSB (multi-stage build) is one of the best practices to separate an internal source code stage from a clean public stage. In Container Builder, users can use compile_stage to quickly compile source code and copy the results into clean_stage for the final image. For more details, refer to https://docs.docker.com/develop/develop-images/multistage-build/

A Basic Example of Container Builder Configuration
-----------------------------------------------------

This example has 2 sections: a `clean_stage` build section and a main control section. The stage build:

* Starts from `base_image` and installs some `debian` and python3 packages into the target image
* Installs archives
* Copies files from the local system and from another image
* Finally does some cleanup and environment settings on the output target image.

The main control section installs the graph dependencies into the target image through the registry. You can specify build options to control the stage build and finally push the target image into the cloud.

Here is the sample code with comments inline: ::

    # Container dockerfile Stage build section
    ---
    unique_stage: final_image  # required, name must be unique
    # base_image is required
    base_image: "nvcr.io/nvidia/deepstream:6.0.1-base"
    stage_model: clean_stage  # Optional

    # Install debian packages
    apt_deps:
      - curl
      - ca-certificates
      - tar
      - python3
      - python3-pip

    # Install pip3 packages
    pip3_deps:
      - PyYAML>=5.4.1

    # Copy local files to image
    local_copy_files:
      - src: "/opt/nvidia/graph-composer/gxe"
        # dst: "/opt/nvidia/graph-composer/gxe"
      - src: "/opt/nvidia/graph-composer/libgxf_core.so"
        # dst: "/opt/nvidia/graph-composer/libgxf_core.so"

    # Copy files from other images or other stages
    stage_copy_files:
      - src_stage: "nvcr.io/nvidia/deepstream:6.0.1-samples"
        src: "/opt/nvidia/deepstream/deepstream/samples"
        # dst: "/opt/nvidia/deepstream/deepstream/samples"

    # Download HTTP archives and install
    http_archives:
      - url: https://host:port/archive.tar.bz2
        curl_option: "-u user:token"
        post_cmd: "tar -jxvf archive.tar.bz2 -C /"

    # Clean up operations
    custom_runs:
      - "apt autoremove && ln -s /opt/nvidia/deepstream/deepstream/samples /samples"

    # Specify WORKDIR
    work_folder: /workspace/test/

    # Specify multiple ENV
    env_list:
      PATH: "/opt/nvidia/graph-composer:$PATH"
      LD_LIBRARY_PATH: "/opt/nvidia/graph-composer/:$LD_LIBRARY_PATH"

    # specify ENTRYPOINT
    #entrypoint: ["/opt/nvidia/graph-composer/gxe"]

    # Container Builder Main Control Section
    ---  # delimiter required
    container_builder: main  # required, any string is ok for name
    graph:  # optional
      graph_files: [deepstream-test1.yaml]  # local graph file
      graph_dst: /workspace/test/  # destination path in target image
      # extension manifest location in target image
      manifest_dst: /workspace/test/
      # extensions installed location in target image
      ext_install_root: /workspace/test/

    # docker build options
    docker_build:
      image_name: deepstream_test1:nvgcb_test
      no_cache: true
      squash: false

    # docker push list to cloud, optional
    # username/password are optional if $docker login already ran
    docker_push:
      - url: "nvcr.io/nvidian/user/deepstream_test1:nvgcb_test"
        username:
        password:
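To build this example, save both documents above into a single config file and pass it to ``container_builder`` together with a graph target file, as in the run steps earlier. The config filename here is a placeholder: ::

    $ container_builder -c basic_example.yaml -d /opt/nvidia/graph-composer/config/target_x86_64_cuda_11_4.yaml

On success, the ``deepstream_test1:nvgcb_test`` image named in the ``docker_build`` section should be listed by ``docker images``.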
A Multi-Stage Example
------------------------

This example shows a multi-stage build. The ``download_stage``, which uses the compile_stage model, downloads all ONNX models from a private git repository, using a ``netrc`` file for permissions. The final image copies a specific file out of ``download_stage`` into its final location. The ``download_stage`` is discarded along with its intermediate layers, so the final image stays clean, keeps minimal dependencies, and gets rid of the ``netrc`` file. ::

    # use compile_stage to download all models through git
    ---
    unique_stage: download_stage
    base_image: "ubuntu:18.04"
    stage_model: compile_stage

    # copy netrc file into compile stage for git clone
    local_copy_files:
      - src: "/home/user/.netrc"
        dst: "/root/.netrc"

    # download models into folder /downloads/models
    git_repo_list:
      - repo_folder: /downloads/models
        url: https://privatehost/user/models  # a private host requiring netrc
        tag: master

    # use clean_stage for final image output
    ---
    # Final Stage
    unique_stage: final_image
    base_image: "ubuntu:20.04"
    stage_model: clean_stage

    # copy a specific file out of download_stage into final_image
    stage_copy_files:
      - src_stage: "download_stage"
        src: "/downloads/models/modelA.onnx"
        dst: "/data/modelA.onnx"

    # Container builder main control settings
    ---
    # Container Builder Config
    container_builder: builder_name  # required
    docker_build:
      image_name: "cb_multi_stage:cb_test"
      # specify step order in case multiple stages appear out of order
      stage_steps: [download_stage, final_image]
      no_cache: true

Container builder main control section specification
-----------------------------------------------------

.. note:: For any ``*dst`` field, a value ending with '/' indicates a folder path on the target image. ``*src`` fields depend on the real source path.

.. csv-table:: Container Builder Control Specification
   :file: ../text/tables/container_builder_control_section_specification.csv
   :widths: 25, 25, 25, 25

Container dockerfile stage section specification
-------------------------------------------------

The table below lists the configuration specification for both ``compile_stage`` and ``clean_stage`` sections. Most fields are common to both stage models; only ``clean_stage`` should be used for the final stage. In addition, users should keep in mind that ``compile_stage`` stages are not optimized and may contain extra packages and files not required for the final output.

.. note:: For any ``*dst`` field, a value ending with '/' indicates a folder path on the target image. ``*src`` fields depend on the real source path.

.. csv-table:: Container Builder Dockerfile Stage Specification
   :file: ../text/tables/container_builder_dockerfile_stage_section_specification.csv
   :widths: 20, 20, 20, 20, 20
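To illustrate the note above, the sketch below shows the two ``dst`` forms with ``local_copy_files``. The destination paths are placeholders, and the copy semantics for the no-slash form are an assumption based on the note rather than documented behavior: ::

    local_copy_files:
      # trailing '/': dst is treated as a folder on the target image
      - src: "/opt/nvidia/graph-composer/gxe"
        dst: "/workspace/bin/"
      # no trailing '/': dst is taken as the full destination file path (assumption)
      - src: "/opt/nvidia/graph-composer/libgxf_core.so"
        dst: "/workspace/bin/libgxf_core.so"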