Customization

The provided reference application can be customized at several levels:

  • Model level: Fine-tune or replace the perception model(s) to expand the application beyond detecting people.

  • Application level: Leverage the provided microservices and APIs to build your own microservices that extend this application, or to create entirely new applications.

  • Microservice level: Modify the provided microservices themselves, using the provided source code as a starting point.


Model

You can update, switch variants of, or replace the two models provided in the Perception (DeepStream) microservice: one for people detection and one for people re-identification.

A comparison of the two models and instructions on enabling them can be found in the Model Combination Guide.

You can use the NVIDIA TAO Toolkit to re-train or fine-tune the provided models with your own data. More details can be found on each model’s respective documentation page.

You can also export your own models as ONNX files or TensorRT engines that can be deployed to the DeepStream perception pipeline; a minimal export sketch follows. More details on configuring the perception pipeline can be found on its Configuration page.
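The following is a minimal sketch of exporting a PyTorch model to ONNX for this purpose, assuming a PyTorch workflow; the model and input dimensions are placeholders for your own network, and DeepStream will convert the resulting ONNX file to a TensorRT engine at startup.

    import torch
    import torchvision

    # Placeholder model: substitute your own trained detection or re-ID network.
    model = torchvision.models.resnet18(weights=None)
    model.eval()

    # Dummy input matching the network's expected input shape (the 3x544x960
    # shape here is an assumption; use your model's actual input dimensions).
    dummy_input = torch.randn(1, 3, 544, 960)

    torch.onnx.export(
        model,
        dummy_input,
        "new_model.onnx",
        input_names=["input"],
        output_names=["output"],
        dynamic_axes={"input": {0: "batch"}},  # allow a variable batch size
    )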

Here is an overview. Refer to the RTLS Operation Parameters for the paths of the config files mentioned below.

Detection

To deploy the people detection model (PGIE) in your Docker Compose setup, follow these steps:

  • Copy the ONNX file into the Docker container of the Perception microservice. The provided ONNX file will be automatically converted to an engine file, as DeepStream uses engine files for inference.

    • Open the Dockerfile at metropolis-apps-standalone-deployment/docker-compose/rtls-app/Dockerfiles/deepstream-<type>.Dockerfile, where <type> is either transformer or cnn.

    • Add the command COPY /your/file/path/new_model.onnx ./ to this file to copy your model (see the sketch after this list).

  • Edit the following files under metropolis-apps-standalone-deployment/docker-compose/rtls-app/deepstream/configs/<type>-models, where <type> is either transformer or cnn.

    • Edit the PGIE configuration YAML file, ds-mtmc-pgie-config.yml:

      • Change the following parameters as needed:

        • If you have an ONNX file, update the onnx-file parameter with the complete and valid file path pointing to your model.

        • If your model is quantized and supports INT8 inference, update the int8-calib-file and network-mode parameters as follows:

          • int8-calib-file should be a valid file path pointing to your model’s INT8 calibration file.

          • network-mode should be set to 1 to run the model in INT8 mode.

      • Further adjust any other configuration parameters in the same file as needed.

    • Edit ds-main-config.txt:

      • Update the config-file parameter under the [primary-gie] section so it points to your PGIE configuration file (see the sketch after this list).
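Here is a minimal sketch of the edits above. The file contents shown follow standard DeepStream conventions (an nvinfer property group in the PGIE YAML and a [primary-gie] group in the deepstream-app config), and all paths are placeholders; verify the parameter names against the shipped config files.

    # deepstream-<type>.Dockerfile: copy your model into the container
    COPY /your/file/path/new_model.onnx ./

    # ds-mtmc-pgie-config.yml: point the PGIE at your model
    property:
      onnx-file: /path/in/container/new_model.onnx
      # Only for quantized models with an INT8 calibration file:
      int8-calib-file: /path/in/container/calibration.cache
      network-mode: 1   # 0=FP32, 1=INT8, 2=FP16

    # ds-main-config.txt: reference the PGIE config under [primary-gie]
    [primary-gie]
    enable=1
    config-file=ds-mtmc-pgie-config.yml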

For details on the Perception container and its configurations, refer to this section. For details on K8s deployments, refer to this section.

Re-Identification

To deploy the re-identification model in your Docker Compose setup, follow these steps:

  • Copy the ONNX file into the Docker container of the Perception microservice. The provided ONNX file will be automatically converted to an engine file, as DeepStream uses engine files for inference.

    • Open the Dockerfile at metropolis-apps-standalone-deployment/docker-compose/rtls-app/Dockerfiles/deepstream-<type>.Dockerfile, where <type> is either transformer or cnn.

    • Add the command COPY /your/file/path/new_model.onnx ./ to this file to copy your model.

  • Edit the following files under metropolis-apps-standalone-deployment/docker-compose/rtls-app/deepstream/configs/<type>-models, where <type> is either transformer or cnn.

    • Edit the tracker configuration YAML file, ds-nvdcf-accuracy-tracker-config.yml:

      • Change the following parameters as needed:

        • If you have an ONNX file, add or update the onnxFile parameter with the complete and valid file path pointing to your model (see the sketch after this list).

        • If your model is quantized and supports INT8 inference, refer to the Re-Identification Model Calibration section of the NvTracker page for additional details.

      • Further adjust any other configuration parameters in the same file as needed.
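Here is a minimal sketch of the edits above. The ReID group follows the NvDCF tracker configuration conventions, and all paths are placeholders; verify the parameter names against the shipped ds-nvdcf-accuracy-tracker-config.yml.

    # deepstream-<type>.Dockerfile: copy your re-identification model
    COPY /your/file/path/new_reid_model.onnx ./

    # ds-nvdcf-accuracy-tracker-config.yml: point the tracker at your model
    ReID:
      # Keep the other ReID parameters from the shipped config; only the
      # model path changes.
      onnxFile: /path/in/container/new_reid_model.onnx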

For details on the Perception container and its configurations, refer to this section. For details on K8s deployments, refer to this section.


Application

The full application is modular. You can build your own microservices and integrate them with the rest.

Build Your Own Microservice

Data flow is essential to any application. In this reference application, data is transmitted between microservices via the Kafka message broker and the Elasticsearch database.

You can build your own streaming microservices by consuming the Kafka messages, or your own batch analytics microservices by reading data from the Elasticsearch database, as sketched below.
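As an illustration, here is a minimal sketch of a streaming microservice that consumes messages from the app’s Kafka broker, using the kafka-python client. The topic name mdx-rtls is a placeholder; check the deployed broker for the actual topic names.

    import json

    from kafka import KafkaConsumer  # pip install kafka-python

    # Connect to the broker on the default Kafka port (9092, see the list below).
    consumer = KafkaConsumer(
        "mdx-rtls",                          # placeholder topic name
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    )

    # Each message carries JSON metadata emitted by an upstream microservice;
    # add your own streaming analytics in the loop body.
    for message in consumer:
        record = message.value
        print(record)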

For reference, the following ports are used during deployment; you can leverage them for any potential integration:

  • Calibration-Toolkit - 8003

  • Default Kafka port - 9092

  • Default ZooKeeper port - 2181

  • Elasticsearch and Kibana (ELK) - 9200 and 5601, respectively

  • Jupyter Lab - 8888

  • NVStreamer - 31000

  • VST - 30000

  • Web-API - 8081

  • Web-UI - 3003

In addition to accessing the streaming data flow and the archived database, you can leverage the provided web API endpoints for various aggregated analytics tasks when building your own microservices.

Our reference UI builds upon these web APIs to provide useful functionality and visualizations; you can also try out the provided web APIs and potentially use them in your own application.
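For example, here is a minimal sketch of querying the Web-API on port 8081 (from the list above). The endpoint path and query parameters are hypothetical; consult the Analytics and Tracking API section referenced below for the actual routes.

    import requests

    # Hypothetical analytics endpoint; replace the route and parameters with
    # actual ones from the Analytics and Tracking API documentation.
    response = requests.get(
        "http://localhost:8081/api/v1/analytics/occupancy",
        params={"sensorId": "camera-01"},  # placeholder query parameter
        timeout=10,
    )
    response.raise_for_status()
    print(response.json())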

For in-depth documentation of the APIs, refer to the Analytics and Tracking API section.

Modify The Reference Architecture

Recall the provided reference app pipeline from the Overview. Within the metropolis-apps-standalone-deployment/docker-compose/ directory, all microservices to be deployed are defined in the foundational/mdx-foundational.yml and rtls-app/mdx-rtls-app.yml files.

You can modify the application pipeline, for example by removing an existing microservice or adding your own, by editing those two files.

For example, if you want to replace the provided UI with your own developed UI, you can modify the web-ui section in mdx-rtls-app.yml, as sketched below.
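A minimal sketch of such an edit, assuming standard Docker Compose service fields; the image name is a placeholder, and the actual web-ui section in mdx-rtls-app.yml may carry additional settings (environment variables, volumes) that your replacement should preserve.

    # mdx-rtls-app.yml: replace the provided UI with your own image
    web-ui:
      image: my-registry/my-custom-ui:latest   # placeholder image name
      ports:
        - "3003:3003"                          # keep the Web-UI port from the list above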

Customize to a Different Object Class

The PGIE model for object detection and the re-identification model in the DeepStream perception pipeline need to be replaced to suit the object class of interest. For example, TrafficCamNet can be used as the PGIE for cars, and the object class in the PGIE config needs to be set accordingly. The app config parameters in the RTLS microservice, particularly the filtering thresholds for bounding box sizes and aspect ratios, also need to be adjusted accordingly; a sketch follows. You can use the NVIDIA TAO Toolkit to re-train the models with your own data for the object class(es) of interest.
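Here is a minimal sketch of the PGIE config change for a car use case. The exact parameters depend on the model format (a TAO-exported TrafficCamNet may use different model-file parameters than a plain ONNX file), so treat the values below as placeholders and verify them against the model card and the shipped configs.

    # ds-mtmc-pgie-config.yml: point the PGIE at a car detector
    property:
      onnx-file: /path/in/container/trafficcamnet.onnx   # placeholder path and format
      num-detected-classes: 4   # TrafficCamNet's categories per its model card

Also revisit the RTLS microservice app config: its bounding-box size and aspect-ratio filtering thresholds were tuned for people and need values appropriate for cars.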


Microservice (Advanced)

To support the potential customization of the microservices themselves (beyond configuration), we provide microservice sample source code in the metropolis-apps-standalone-deployment/modules/ directory: