Customization

The provided reference application can be customized at several levels:

  • Model level: Fine-tune or replace the perception model(s) to expand the application beyond detecting people.

  • Application level: Leverage the provided microservices and APIs to build your own microservices that extend this application, or to create entirely new applications.

  • Microservice level: Modify the provided microservices themselves, starting from the provided source code.


Model

The Perception (DeepStream) microservice ships with a model for people detection. You can update it, switch to a different variant, or replace it entirely.

More details on how the model variants compare and how to enable them can be found in the Model Combination Guide.

You can use the NVIDIA TAO Toolkit to re-train or fine-tune the provided models on your own data. More details can be found on each model’s respective documentation page.

You can also export your own models as ONNX files or TensorRT engines and deploy them to the DeepStream perception pipeline. More details on configuring the perception pipeline can be found on its Configuration page.
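
For example, a model trained in PyTorch can be exported to ONNX with a few lines of Python. The sketch below is a minimal illustration and not part of the reference application: the TinyDetector class and the 960x544 input resolution are placeholders for your own detector and its expected input.

    # Minimal sketch: export a (placeholder) PyTorch detector to ONNX so it can
    # be referenced from the DeepStream perception pipeline configuration.
    import torch
    import torch.nn as nn

    class TinyDetector(nn.Module):
        """Stand-in for your own trained detector (hypothetical, illustration only)."""
        def __init__(self):
            super().__init__()
            self.backbone = nn.Conv2d(3, 8, kernel_size=3, padding=1)

        def forward(self, x):
            return self.backbone(x)

    model = TinyDetector().eval()
    dummy_input = torch.randn(1, 3, 544, 960)   # example input resolution (assumption)

    torch.onnx.export(
        model,
        dummy_input,
        "my_people_detector.onnx",
        input_names=["input"],
        output_names=["output"],
        dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    )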


Application

The full application is modularized. You can build your own microservices and integrate them with the rest of the application.

Build Your Own Microservice

Data flow is essential to any application. In this reference application, data moves between microservices through a Kafka message broker and an Elasticsearch database.

You can build your own streaming microservices by consuming the Kafka messages, or batch analytics microservices by reading data from the Elasticsearch database.
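
As a rough illustration, a minimal Python sketch of both patterns is shown below, assuming the default ports listed further down (Kafka on 9092, Elasticsearch on 9200). The topic name mdx-raw and the index name mdx-behavior are hypothetical placeholders; substitute the ones used by your deployment.

    # Minimal sketch: consume streaming messages from Kafka and read archived
    # records from Elasticsearch. Topic and index names are placeholders.
    import json

    from elasticsearch import Elasticsearch   # pip install elasticsearch
    from kafka import KafkaConsumer           # pip install kafka-python

    # Streaming path: subscribe to a Kafka topic (topic name is an assumption).
    consumer = KafkaConsumer(
        "mdx-raw",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    for message in consumer:
        print(message.value)   # process each message here
        break                  # remove to keep consuming

    # Batch path: query archived records from Elasticsearch (index name is an assumption).
    es = Elasticsearch("http://localhost:9200")
    resp = es.search(index="mdx-behavior", query={"match_all": {}}, size=10)
    for hit in resp["hits"]["hits"]:
        print(hit["_source"])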

As a reference, the following ports are used during deployment; you can leverage them for any potential integration:

  • Calibration-Toolkit - 8003

  • Default Kafka port - 9092

  • Default ZooKeeper port - 2181

  • Elasticsearch and Kibana (ELK) - 9200 and 5601, respectively

  • Jupyter Lab - 8888

  • NVStreamer - 31000

  • Triton - 8000 (HTTP), 8001 (GRPC)

  • VST - 30000

  • Web-API - 8081

  • Web-UI - 3002
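
If it helps, you can quickly check which of these services are reachable from your integration host with a small standard-library script such as the sketch below (it assumes everything runs on localhost).

    # Minimal sketch: probe the default ports listed above to see which services
    # are reachable before wiring up an integration. Assumes a local deployment.
    import socket

    PORTS = {
        "Calibration-Toolkit": 8003, "Kafka": 9092, "ZooKeeper": 2181,
        "Elasticsearch": 9200, "Kibana": 5601, "Jupyter Lab": 8888,
        "NVStreamer": 31000, "Triton HTTP": 8000, "Triton gRPC": 8001,
        "VST": 30000, "Web-API": 8081, "Web-UI": 3002,
    }

    for name, port in PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(1.0)
            status = "open" if sock.connect_ex(("localhost", port)) == 0 else "unreachable"
        print(f"{name:>20} (:{port}) {status}")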

In addition to the streaming data flow and the archived database, web API endpoints are provided for various aggregated analytics tasks that you can leverage when building your own microservices.

Our reference UI builds on these web APIs to provide its functionality and visualizations; you can also try out the web APIs yourself and use them in your own application.
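
As a rough illustration, the snippet below calls the web API with the Python requests library on the Web-API port listed above. The endpoint path and query parameter are hypothetical placeholders; refer to the API Tutorial Notebook and the API reference for the actual routes.

    # Minimal sketch: query an analytics web API endpoint. The route and
    # parameters below are placeholders, not the actual API surface.
    import requests   # pip install requests

    BASE_URL = "http://localhost:8081"   # Web-API port from the list above

    resp = requests.get(
        f"{BASE_URL}/api/v1/occupancy/count",   # hypothetical endpoint
        params={"sensorId": "my-camera-1"},     # hypothetical parameter
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())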

Try out web APIs in notebooks

We provide a Jupyter notebook so that users can easily understand and test the provided APIs. Please refer to the API Tutorial Notebook section for more details.

Modify The Reference Architecture

Recall the reference application pipeline from the Overview. Within the metropolis-apps-standalone-deployment/docker-compose/ directory, all microservices to be deployed are defined in the foundational/mdx-foundational.yml and people-analytics-app/mdx-people-analytics-app.yml files.

You can modify the application pipeline, for example by removing an existing microservice or adding your own, by editing these two files.

For example, to replace the provided UI with your own, modify the web-ui section in mdx-people-analytics-app.yml.
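
As a rough sketch, the snippet below swaps the image used by the web-ui service programmatically with PyYAML. The service name web-ui, the image key, and the custom image tag are assumptions based on typical docker-compose layouts, so check the actual file (or simply edit it by hand) before applying anything like this.

    # Minimal sketch: point the web-ui service at a custom UI image.
    # Key names and the image tag below are assumptions; verify against the file.
    import yaml   # pip install pyyaml

    COMPOSE_FILE = (
        "metropolis-apps-standalone-deployment/docker-compose/"
        "people-analytics-app/mdx-people-analytics-app.yml"
    )

    with open(COMPOSE_FILE) as f:
        compose = yaml.safe_load(f)

    services = compose.get("services", {})
    if "web-ui" in services:
        services["web-ui"]["image"] = "my-registry/my-custom-ui:latest"   # hypothetical image

    with open(COMPOSE_FILE, "w") as f:
        yaml.safe_dump(compose, f, sort_keys=False)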


Microservice (Advanced)

To support customization of the microservices themselves (beyond configuration), we provide sample microservice source code in the metropolis-apps-standalone-deployment/modules/ directory:

  • perception/ for the Perception (DeepStream) microservice.

  • behavior-analytics/ for the Behavior Analytics microservice. For the Occupancy Analytics application, the PeopleAnalytics.scala class in src/main/scala/example/ is run.

  • behavior-learning/ for the Behavior Learning microservice. Behavior Learning runs the Ingestion and ModelTraining classes.

  • analytics-tracking-web-api/ for the Analytics and Tracking API microservice. The web API microservice is started from index.js.

  • analytics-tracking-web-ui/ for the reference Multi-Camera Tracking UI microservice.