Automatic Accuracy Tuning

Overview

PipeTuner is an automatic accuracy tuning tool designed to optimize the parameters of any video AI and analytics processing pipeline, including multi-camera tracking workflows such as Multi-Target Multi-Camera (MTMC) tracking and Real-Time Location System (RTLS). It efficiently explores the high-dimensional parameter space to find the set of parameters that yields the highest accuracy on a given dataset, without requiring in-depth technical knowledge from the user.

Key Features:

  • Supports accuracy tuning for multi-camera tracking workflows, including the Perception (DeepStream) microservice

  • Users provide a representative dataset and PipeTuner automatically searches for optimal parameters

  • Supports data augmentation to improve robustness of tuned parameters

  • Integrates multiple optimization algorithms to efficiently search the parameter space

  • Provides intuitive configuration files to define the search space for each parameter
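
For illustration, a parameter search space can be thought of as a set of named parameters, each with a type and a value range. The parameter names and schema below are hypothetical and do not reflect PipeTuner's actual configuration format:

    # Hypothetical sketch of a search-space definition; names and schema are
    # illustrative only, not PipeTuner's actual configuration format.
    search_space = {
        # Detector confidence threshold, sampled uniformly in [0.1, 0.9]
        "detector.confidence_threshold": {"type": "float", "min": 0.1, "max": 0.9},
        # Tracker shadow-tracking age, sampled from an integer range
        "tracker.max_shadow_tracking_age": {"type": "int", "min": 10, "max": 90},
        # Weight of the matching score used for data association
        "tracker.matching_score_weight": {"type": "float", "min": 0.0, "max": 1.0},
    }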

PipeTuner works by iteratively executing the perception or multi-camera tracking pipeline with different parameter configurations, evaluating the accuracy metrics (such as MOTA, IDF1, or HOTA) on the provided dataset, and intelligently updating the parameter values using optimization algorithms. This process continues until the desired accuracy is achieved or a specified number of iterations is reached, resulting in an optimized set of parameters tailored to the specific use case and dataset.

Download

PipeTuner is hosted on NGC. Users need to download the following resources to start.

  • PipeTuner Collection: The collection of all PipeTuner resources, including the introduction, user guide, and setup instructions;

  • PipeTuner Container: The PipeTuner Docker container;

  • PipeTuner User Guide and Sample Data: The PipeTuner user guide and sample data to run as an example, including a sample dataset for multi-camera tracking, configuration files for tuning, and scripts to launch the pipeline.

Requirements

Here is a summary of PipeTuner’s requirements:

Dataset

Users provide a representative dataset for their use case. It should include a few sample videos with the relevant ground truth annotated, such as bounding boxes, single-camera IDs, and multi-camera IDs.
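
As a rough illustration, per-frame ground truth is often stored one object per row in a MOTChallenge-style CSV. The exact format PipeTuner expects is defined by the sample data, so treat the layout assumed in the sketch below as an example only:

    import csv

    # Assumed MOTChallenge-style layout: frame, object_id, x, y, width, height, ...
    # The actual ground-truth format is defined by the PipeTuner sample data.
    def load_ground_truth(path):
        boxes = []
        with open(path, newline="") as f:
            for row in csv.reader(f):
                frame, obj_id = int(row[0]), int(row[1])
                x, y, w, h = map(float, row[2:6])
                boxes.append({"frame": frame, "id": obj_id, "bbox": (x, y, w, h)})
        return boxes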

Container

Users provide the container(s) of the AI and/or analytics pipeline to be tuned

Accuracy KPI

Users select one of the multi-object tracking KPIs: HOTA, MOTA or IDF1
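
For reference, MOTA and IDF1 can be computed from aggregate error counts as shown in the illustrative snippet below; HOTA combines detection and association accuracy over a range of matching thresholds and is omitted here. This is not PipeTuner's evaluation code:

    def mota(num_fn, num_fp, num_idsw, num_gt):
        # MOTA = 1 - (FN + FP + IDSW) / GT, where GT is the total number
        # of ground-truth boxes over all frames.
        return 1.0 - (num_fn + num_fp + num_idsw) / num_gt

    def idf1(idtp, idfp, idfn):
        # IDF1 = 2 * IDTP / (2 * IDTP + IDFP + IDFN), the F1 score over
        # identity-level true positives, false positives, and false negatives.
        return 2.0 * idtp / (2.0 * idtp + idfp + idfn)

    print(mota(num_fn=120, num_fp=80, num_idsw=15, num_gt=2000))  # 0.8925
    print(idf1(idtp=1700, idfp=150, idfn=300))                    # ~0.883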

Process

The overall steps are as follows:

  • Download Container: Pull the PipeTuner container from the NGC registry;

  • Download Sample Data: Download and extract sample data from NGC resource;

  • Data Preparation: Create a dataset in the same format as the sample data, and update the configuration files to match the use case;

  • Launch Tuning: Launch the tuning pipeline using the desired configuration and data;

  • Retrieve Results: Retrieve the optimal parameters and visualize the tuning results;

  • Deploy: Deploy the optimal parameters into the desired use case.

PipeTuner searches for the optimal parameters by iterating the following three steps until the accuracy KPI converges or the specified maximum number of iterations (i.e., epochs) is reached (a conceptual sketch of the loop follows this list):

  • ParamSearch: Given the accuracy KPI scores from previous iterations, propose a set of parameters expected to yield a higher accuracy KPI. In the very first iteration, parameters are randomly sampled from the parameter space;

  • PipeExec: Given the sampled/proposed parameter set, execute the pipeline with those parameters and generate metadata to allow accuracy evaluation;

  • PipeEval: Given the metadata outputs from the pipeline and the dataset, perform the accuracy evaluation based on the selected accuracy metric and generate the accuracy KPI score.
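
Conceptually, the loop can be sketched as below, reusing the hypothetical search-space format shown earlier. All functions here are placeholders, not PipeTuner APIs: param_search does plain random sampling (whereas PipeTuner uses its integrated optimization algorithms), run_pipeline and evaluate stand in for PipeExec and PipeEval, and the convergence check is simplified:

    import random

    def param_search(search_space, history):
        # ParamSearch (placeholder): pure random sampling; PipeTuner instead uses
        # optimization algorithms that exploit the KPI scores stored in `history`.
        sample = {}
        for name, spec in search_space.items():
            if spec["type"] == "int":
                sample[name] = random.randint(spec["min"], spec["max"])
            else:
                sample[name] = random.uniform(spec["min"], spec["max"])
        return sample

    def tune(search_space, run_pipeline, evaluate, max_epochs=50, patience=5):
        # run_pipeline and evaluate are caller-supplied placeholders standing in
        # for PipeExec (pipeline execution) and PipeEval (KPI scoring).
        best_params, best_score, history = None, float("-inf"), []
        epochs_without_improvement = 0
        for epoch in range(max_epochs):
            params = param_search(search_space, history)   # ParamSearch
            metadata = run_pipeline(params)                 # PipeExec
            score = evaluate(metadata)                      # PipeEval -> KPI score
            history.append((params, score))
            if score > best_score:
                best_params, best_score = params, score
                epochs_without_improvement = 0
            else:
                epochs_without_improvement += 1
            # Simplified convergence check: stop when the KPI has not improved
            # for `patience` consecutive epochs.
            if epochs_without_improvement >= patience:
                break
        return best_params, best_score

In the actual tool, these steps run against the user's container and dataset, and the best-scoring parameter set is what gets retrieved and deployed in the final two steps of the process above.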