Gst-nvtracker¶
This plugin allows the DS pipeline to use a low-level tracker library to track the detected objects with persistent (possibly unique) IDs over time. It supports any low-level library that implements NvDsTracker
API, including the reference implementations provided by the NvMultiObjectTracker library: NvDCF, DeepSORT, and IOU trackers. As part of this API, the plugin queries the low-level library for capabilities and requirements concerning the input format, memory type, and batch processing support. Based on these queries, the plugin then converts the input frame buffers into the format requested by the low-level tracker library. For example, the NvDCF and DeepSORT trackers use NV12 or RGBA, while IOU requires no video frame buffers at all.
The capabilities of a low-level tracker library also include support for batch processing across multiple input streams. Batch processing is typically more efficient than processing each stream independently, especially when GPU-based acceleration is performed by the low-level library. If a low-level library supports batch processing, the plugin selects that mode of operation; however, this preference can be overridden with the enable-batch-process
configuration option if the low-level library supports both batch and per-stream modes.
The low-level capabilities also include support for passing past-frame data, which is the object tracking data generated in past frames but not yet reported as output. This can happen when the low-level tracker stores the object tracking data generated in past frames only internally (because of, say, low tracking confidence), but later decides to report it due to, say, increased confidence. If past-frame data is retrieved from the low-level tracker, it is reported as a user-meta, called NvDsPastFrameObjBatch
. This can be enabled by the enable-past-frame
configuration option.
The plugin accepts NV12- or RGBA-formatted frame data from the upstream component and scales (and/or converts) the input buffer to a buffer in the tracker plugin based on the format required by the low-level library, with the frame resolution specified by tracker-width
and tracker-height
in the configuration file’s [tracker]
section. The path to the low-level tracker library is to be specified via ll-lib-file
configuration option in the same section. The low-level library to be used may also require its own configuration file, which can be specified via ll-config-file
option. If ll-config-file
is not specified, the low-level tracker library may proceed with its default parameter values. The reference low-level tracker implementations provided by the NvMultiObjectTracker
library support different tracking algorithms:
NvDCF: The NvDCF tracker is an NVIDIA®-adapted Discriminative Correlation Filter (DCF) tracker that uses a correlation filter-based online discriminative learning algorithm for visual object tracking capability, while using a data association algorithm and a state estimator for multi-object tracking.
DeepSORT: The DeepSORT tracker is a re-implementation of the official DeepSORT tracker, which uses the deep cosine metric learning with a Re-ID neural network. This implementation allows users to use any Re-ID network as long as it is supported by NVIDIA’s TensorRT™ framework.
IOU Tracker: The Intersection-Over-Union (IOU) tracker uses the IOU values among the detector’s bounding boxes between two consecutive frames to perform the association between them or assign a new target ID if no match is found. This tracker includes logic to handle false positives and false negatives from the object detector; however, it can be considered the bare-minimum object tracker and may serve only as a baseline.
More details on each algorithm and its implementation can be found in the NvMultiObjectTracker: A Reference Low-Level Tracker Library section.
Inputs and Outputs¶
This section summarizes the inputs, outputs, and communication facilities of the Gst-nvtracker plugin.
Input
Gst Buffer (as a frame batch from available source streams)
NvDsBatchMeta
More details about NvDsBatchMeta can be found in the NvDsBatchMeta documentation. The color formats supported for the input video frame by the NvTracker plugin are NV12 and RGBA.
Output
Gst Buffer (provided as an input)
NvDsBatchMeta
(with the addition of tracked object coordinates, tracker confidence, and object IDs in NvDsObjectMeta
)
Note
If the tracker algorithm does not generate a confidence value, then the tracker confidence value will be set to the default value (i.e., 1.0) for tracked objects. For the IOU and DeepSORT trackers, tracker_confidence is set to 1.0, as these algorithms do not generate confidence values for tracked objects. The NvDCF tracker, on the other hand, generates confidence values for the tracked objects thanks to its visual tracking capability, and the value is set in the tracker_confidence field of the NvDsObjectMeta structure.
Note that there are separate parameters in NvDsObjectMeta for the detector’s confidence and the tracker’s confidence, which are confidence and tracker_confidence, respectively. More details can be found in the New metadata fields section.
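As an illustration of how downstream components can consume this metadata, below is a minimal sketch of a helper that reads the tracking ID, detector confidence, and tracker confidence from NvDsObjectMeta. It assumes the standard DeepStream metadata API from gstnvdsmeta.h; the function name and printing logic are illustrative only.

```cpp
#include "gstnvdsmeta.h"

// Illustrative helper: iterate the batch meta attached to a GstBuffer and
// print per-object tracking info. Call this from a pad probe downstream of
// the tracker (function name and output format are hypothetical).
static void print_tracked_objects(GstBuffer *buf)
{
    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
    if (!batch_meta)
        return;

    for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)l_frame->data;
        for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
            NvDsObjectMeta *obj_meta = (NvDsObjectMeta *)l_obj->data;
            // object_id is the persistent tracking ID assigned by the tracker;
            // confidence is the detector's score, tracker_confidence the tracker's.
            g_print("stream %u  id %lu  det_conf %.2f  trk_conf %.2f\n",
                    frame_meta->pad_index,
                    (unsigned long)obj_meta->object_id,
                    obj_meta->confidence,
                    obj_meta->tracker_confidence);
        }
    }
}
```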
The following table summarizes the features of the plugin.
| Feature | Description | Release |
|---|---|---|
| Configurable tracker width/height | Frames are internally scaled in the NvTracker plugin to the specified resolution for tracking and passed to the low-level library | DS 2.0 |
| Multi-stream CPU/GPU tracker | Supports tracking on batched buffers consisting of frames from multiple sources | DS 2.0 |
| NV12 input | — | DS 2.0 |
| RGBA input | — | DS 3.0 |
| Configurable GPU device | User can select the GPU for internal scaling/color format conversions and tracking | DS 2.0 |
| Dynamic addition/deletion of sources at runtime | Supports tracking on new sources added at runtime and cleanup of resources when sources are removed | DS 3.0 |
| Support for user’s choice of low-level library | Dynamically loads the user-selected low-level library | DS 4.0 |
| Support for batch processing | Supports sending frames from multiple input streams to the low-level library as a batch if the low-level library advertises the capability to handle it | DS 4.0 |
| Support for multiple buffer formats as input to low-level library | Converts the input buffer to the formats requested by the low-level library, for up to 4 formats per frame | DS 4.0 |
| Support for reporting past-frame data | Supports reporting past-frame data if the low-level library supports the capability | DS 5.0 |
| Support for enabling tracking-id display | Supports enabling or disabling display of the tracking ID | DS 5.0 |
| Support for tracking ID reset based on event | Based on pipeline events (i.e., GST_NVEVENT_STREAM_EOS and GST_NVEVENT_STREAM_RESET), the tracking IDs on a particular stream can be reset to 0 or assigned new IDs | DS 6.0 |
Gst Properties¶
The following table describes the Gst properties of the Gst-nvtracker plugin.
| Property | Meaning | Type and Range | Example Notes |
|---|---|---|---|
| tracker-width | Frame width at which the tracker is to operate, in pixels. | Integer, 0 to 4,294,967,295 | tracker-width=640 (to be a multiple of 32) |
| tracker-height | Frame height at which the tracker is to operate, in pixels. | Integer, 0 to 4,294,967,295 | tracker-height=384 (to be a multiple of 32) |
| ll-lib-file | Pathname of the low-level tracker library to be loaded by Gst-nvtracker. | String | ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so |
| ll-config-file | Configuration file for the low-level library if needed. | Path to configuration file | ll-config-file=config_tracker_NvDCF_perf.yml |
| gpu-id | ID of the GPU on which device/unified memory is to be allocated, and with which buffer copy/scaling is to be done. (dGPU only.) | Integer, 0 to 4,294,967,295 | gpu-id=0 |
| enable-batch-process | Enables/disables batch processing mode. Only effective if the low-level library supports both batch and per-stream processing. (Optional; default value is 1.) | Boolean | enable-batch-process=1 |
| enable-past-frame | Enables/disables reporting of past-frame data. Only effective if the low-level library supports it. (Optional; default value is 0.) | Boolean | enable-past-frame=1 |
| tracking-surface-type | Sets the surface stream type for tracking. (Default value is 0.) | Integer, ≥0 | tracking-surface-type=0 |
| display-tracking-id | Enables tracking ID display on OSD. | Boolean | display-tracking-id=1 |
| compute-hw | Compute engine to use for scaling. 0 - Default; 1 - GPU; 2 - VIC (Jetson only) | Integer, 0 to 2 | compute-hw=1 |
| tracking-id-reset-mode | Allows force-reset of the tracking ID based on pipeline events. Once tracking ID reset is enabled and such an event happens, the lower 32 bits of the tracking ID are reset to 0. 0: Do not reset the tracking ID when a stream reset or EOS event happens; 1: Terminate all existing trackers and assign new IDs for a stream when a stream reset happens (i.e., GST_NVEVENT_STREAM_RESET); 2: Let the tracking ID start from 0 after receiving an EOS event (i.e., GST_NVEVENT_STREAM_EOS) (note: only the lower 32 bits of the tracking ID start from 0); 3: Enable both options 1 and 2 | Integer, 0 to 3 | tracking-id-reset-mode=0 |
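For reference, below is a hedged example of how these properties might appear in the [tracker] section of a deepstream-app configuration file. The values simply mirror the Examples column above and are illustrative; the enable key is a deepstream-app-level switch rather than a Gst property of the plugin.

```
[tracker]
enable=1
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_perf.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1
```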
NvDsTracker API for Low-Level Tracker Library¶
A low-level tracker library can be implemented using the API defined in sources/includes/nvdstracker.h
. Parts of the API refer to sources/includes/nvbufsurface.h
. The names of API functions and data structures are prefixed with NvMOT
, which stands for NVIDIA Multi-Object Tracker. Below is the general flow of the API from a low-level library’s perspective:
The first required function is:
NvMOTStatus NvMOT_Query ( uint16_t customConfigFilePathSize, char* pCustomConfigFilePath, NvMOTQuery *pQuery );
The plugin uses this function to query the low-level library’s capabilities and requirements before it starts any processing sessions (i.e., contexts) with the library. Queried properties include the input frame’s color format (e.g., RGBA or NV12), memory type (e.g., NVIDIA® CUDA® device or CPU-mapped NVMM), and support for batch processing.
The plugin performs this query once during the initialization stage, and its results are applied to all contexts established with the low-level library. If a low-level library configuration file is specified, it is provided in the query for the library to consult. The query reply structure, NvMOTQuery, contains the following fields:

NvMOTCompute computeConfig: Reports the compute targets supported by the library. The plugin currently only echoes the reported value when initiating a context.

uint8_t numTransforms: The number of color formats required by the low-level library. The valid range for this field is 0 to NVMOT_MAX_TRANSFORMS. Set this to 0 if the library does not require any visual data. Note that 0 does not mean that untransformed data will be passed to the library.

NvBufSurfaceColorFormat colorFormats[NVMOT_MAX_TRANSFORMS]: The list of color formats required by the low-level library. Only the first numTransforms entries are valid.

NvBufSurfaceMemType memType: Memory type for the transform buffers. The plugin allocates buffers of this type to store color- and scale-converted frames, and the buffers are passed to the low-level library for each frame. Support is currently limited to the following types:

dGPU: NVBUF_MEM_CUDA_PINNED, NVBUF_MEM_CUDA_UNIFIED

Jetson: NVBUF_MEM_SURFACE_ARRAY

bool supportBatchProcessing: True if the low-level library supports batch processing across multiple streams; otherwise false.

bool supportPastFrame: True if the low-level library supports outputting past-frame data; otherwise false.
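For illustration, below is a minimal sketch (not the reference implementation) of how a custom low-level library might fill the query reply using the fields described above. Enum values such as NvMOTStatus_OK and NVBUF_COLOR_FORMAT_NV12 are assumptions here; consult sources/includes/nvdstracker.h and nvbufsurface.h for the exact definitions.

```cpp
#include "nvdstracker.h"   // NvMOTQuery, NvMOTStatus, ...

NvMOTStatus NvMOT_Query(uint16_t customConfigFilePathSize,
                        char* pCustomConfigFilePath,
                        NvMOTQuery* pQuery)
{
    // Config file parsing is omitted in this sketch.
    (void)customConfigFilePathSize;
    (void)pCustomConfigFilePath;

    // Request a single color format (NV12) in CUDA unified memory.
    pQuery->numTransforms = 1;
    pQuery->colorFormats[0] = NVBUF_COLOR_FORMAT_NV12;  // assumed enum value
    pQuery->memType = NVBUF_MEM_CUDA_UNIFIED;           // dGPU; NVBUF_MEM_SURFACE_ARRAY on Jetson

    // Advertise batch processing and past-frame data support.
    pQuery->supportBatchProcessing = true;
    pQuery->supportPastFrame = true;

    return NvMOTStatus_OK;                              // assumed enum value
}
```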
After the query, and before any frames arrive, the plugin must initialize a context with the low-level library by calling:
NvMOTStatus NvMOT_Init ( NvMOTConfig *pConfigIn, NvMOTContextHandle *pContextHandle, NvMOTConfigResponse *pConfigResponse );
The context handle is opaque outside the low-level library. In the batch processing mode, the plugin requests a single context for all input streams. In per-stream processing mode, on the other hand, the plugin makes this call for each input stream so that each stream has its own context. This call includes a configuration request for the context. The low-level library has an opportunity to:
Review the configuration and create a context only if the request is accepted. If any part of the configuration request is rejected, no context is created, and the return status must be set to NvMOTStatus_Error. The pConfigResponse field can optionally contain status for specific configuration items.

Pre-allocate resources based on the configuration.
Note
In the NvMOTMiscConfig structure, the logMsg field is currently unsupported and uninitialized.

The customConfigFilePath pointer is only valid during the call.
Once a context is initialized, the plugin sends frame data along with detected object bounding boxes to the low-level library whenever it receives such data from upstream. It always presents the data as a batch of frames, although the batch can contain only a single frame in per-stream processing contexts. Note that depending on the frame arrival timings to the tracker plugin, the composition of frame batches could either be a full batch (that contains a frame from every stream) or a partial batch (that contains a frame from only a subset of the streams). In either case, each batch is guaranteed to contain at most one frame from each stream.
The function call for this processing is:
NvMOTStatus NvMOT_Process ( NvMOTContextHandle contextHandle, NvMOTProcessParams *pParams, NvMOTTrackedObjBatch *pTrackedObjectsBatch );

where:
pParams
is a pointer to the input batch of frames to process. The structure contains a list of one or more frames, with at most one frame from each stream. Thus, no two frame entries have the same streamID. Each entry of frame data contains a list of one or more buffers in the color formats required by the low-level library, as well as a list of object attribute data for the frame. Most libraries require at most one color format.
pTrackedObjectsBatch
is a pointer to the output batch of object attribute data. It is pre-populated with a value for numFilled, which is the same as the number of frames included in the input parameters.

If a frame has no output object attribute data, it is still counted in numFilled and is represented with an empty list entry (NvMOTTrackedObjList). An empty list entry has the correct streamID set and numFilled set to 0.

Note
The output object attribute data NvMOTTrackedObj contains a pointer to the detector object (provided in the input) that is associated with a tracked object, which is stored in associatedObjectIn. You must set this to the associated input object only for the frame where the input object is passed in. For a pipeline with PGIE interval=1, for example:

Frame 0: NvMOTObjToTrack X is passed in. The tracker assigns it ID 1, and the output object’s associatedObjectIn points to X.

Frame 1: Inference is skipped, so there is no input object from the detector to be associated with. The tracker finds Object 1, and the output object’s associatedObjectIn points to NULL.

Frame 2: NvMOTObjToTrack Y is passed in. The tracker identifies it as Object 1. The output Object 1 has associatedObjectIn pointing to Y.
Depending on the capability of the low-level tracker, there could be tracked object data that was generated in past frames but stored only internally without being reported, due to, say, low confidence in those frames, while the target is still being tracked in the background. If the tracker becomes more confident in later frames and is ready to report that data, the past-frame data can be retrieved by the plugin using the following function call and output to batch_user_meta_list in NvDsBatchMeta as a user-meta:

NvMOTStatus NvMOT_ProcessPast ( NvMOTContextHandle contextHandle, NvMOTProcessParams *pParams, NvDsPastFrameObjBatch *pPastFrameObjBatch );
where:
pParams
is a pointer to the input batch of frames to process. This structure is needed to check the list of stream IDs in the batch.
pPastFrameObjBatch
is a pointer to the output batch of object attribute data generated in the past frames. The data structure NvDsPastFrameObjBatch is defined in include/nvds_tracker_meta.h. It may include a set of tracking data for each stream in the input. For each object, there could be multiple past-frame data entries if the tracking data is stored for multiple frames for that object.
If a video stream source is removed on the fly, the plugin calls the following function so that the low-level tracker library can remove it as well. Note that this API is optional and valid only when the batch processing mode is enabled, meaning that it will be executed only when the low-level tracker library has an actual implementation of it. If called, the low-level tracker library can release any per-stream resources that it may have allocated:
void NvMOT_RemoveStreams ( NvMOTContextHandle contextHandle, NvMOTStreamId streamIdMask );
When all processing is complete, the plugin calls this function to clean up the context and deallocate its resources:
void NvMOT_DeInit (NvMOTContextHandle contextHandle);
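To summarize the API flow described above, here is a hedged sketch (not actual plugin code) of the order in which these calls are made for one processing session; the structures are assumed to be prepared by the caller, error handling is omitted, and NvMOT_Query() is assumed to have been called once beforehand.

```cpp
#include "nvdstracker.h"

// Simplified session flow: Init -> Process (+ optional ProcessPast) -> DeInit.
void runTrackerSession(NvMOTConfig *pConfig,
                       NvMOTProcessParams *pParams,
                       NvMOTTrackedObjBatch *pTrackedObjBatch,
                       NvDsPastFrameObjBatch *pPastFrameObjBatch,
                       int numBatches)
{
    NvMOTContextHandle ctx = nullptr;
    NvMOTConfigResponse configResponse;

    // One context for all streams in batch mode; one per stream otherwise.
    NvMOT_Init(pConfig, &ctx, &configResponse);

    for (int i = 0; i < numBatches; ++i) {
        NvMOT_Process(ctx, pParams, pTrackedObjBatch);        // per frame batch
        NvMOT_ProcessPast(ctx, pParams, pPastFrameObjBatch);  // optional, if supported
    }

    // NvMOT_RemoveStreams(ctx, streamIdMask) would be called here if a source
    // were removed at runtime (batch processing mode only).
    NvMOT_DeInit(ctx);  // cleanup when all processing is complete
}
```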
NvMultiObjectTracker : A Reference Low-Level Tracker Library¶
Multi-object tracking (MOT) is a key building block for a large number of intelligent video analytics (IVA) applications where analyzing the temporal changes of objects’ states is required. Given a set of detected objects from the Primary GIE (PGIE) module on a single or multiple streams and with the APIs defined to work with the tracker plugin, the low-level tracker library is expected to carry out actual multi-object tracking operations to keep persistent IDs to the same objects over time.
DeepStream SDK (from v6.0) provides a single reference low-level tracker library, called NvMultiObjectTracker, that implements all three low-level tracking algorithms (i.e., IOU, NvDCF, and DeepSORT) in a unified architecture. It supports multi-stream, multi-object tracking in the batch processing mode for efficient processing on both CPU and GPU. The following sections will cover the unified tracker architecture and the details of each reference tracker implementation.
Unified Tracker Architecture for Composable Multi-Object Tracker¶
Different multi-object trackers share common modules when it comes to basic functionalities (e.g., data association, target management, and state estimation), while differing in other core functionalities (e.g., visual tracking for NvDCF and the deep association metric for DeepSORT). The NvMultiObjectTracker low-level tracker library employs a unified architecture that allows an object tracker to be composed through configuration by enabling only the modules required for that particular tracker. The IOU tracker, for example, requires a minimum set of modules consisting of the data association and target management modules. The NvDCF tracker, on the other hand, requires a DCF-based visual tracking module, a state estimator module, and a trajectory management module in addition to the modules in the IOU tracker. Instead of the visual tracking module, the DeepSORT tracker requires a Re-ID-based deep association metric for its data association module.
The table below summarizes what modules are used to compose each object tracker, showing what modules are shared across different object trackers and how each object tracker differs in composition:
| Tracker Type | Visual Tracker | State Estimator | Target Management | Trajectory Management | Data Association: Proximity & Size | Data Association: Visual Similarity | Data Association: Re-ID |
|---|---|---|---|---|---|---|---|
| IOU | | | O | O | O | | |
| NvDCF | O | O | O | O | O | O | |
| DeepSORT | | O | O | O | O | | O |
By enabling the required modules in a config file, each object tracker can be composed thanks to the unified architecture. In the following sections, we will first look at the general workflow of the NvMultiObjectTracker library and its core modules, and then at each type of object tracker in more detail, with explanations of the config params in each module.
Work Flow and Core Modules in The NvMultiObjectTracker Library¶
The input to a low-level tracker library consists of (1) a batch of video frames from a single or multiple streams and (2) a list of detector objects for each video frame. If the detection interval (i.e., interval
in Primary GIE section) is set larger than 0, the input data to the low-level tracker would have the detector object data only when the inferencing for object detection is performed for a video frame batch (i.e., the inferenced frame batch). For the frame batches where the inference is skipped (i.e., the uninferenced frame batch), the input data would include only the video frames.
Note
A detector object refers to an object that is detected by the detector in PGIE module, which is provided to the multi-object tracker module as an input.
A target refers to an object that is being tracked by the object tracker.
An inferenced frame is a video frame where inference is carried out for object detection. Since the inference interval can be configured in the PGIE settings and can be larger than zero, the frameNum of two consecutive inferenced frames may not be contiguous.
For carrying out multi-object tracking operations with the given input data, below are the essential functionalities to be performed:
Data association between the detector objects from a new video frame and the existing targets for the same video stream
Target management based on the data association results, including the target state update and the creation and termination of targets
Depending on the type of tracker, there could be some additional processing to be performed before the data association. The NvDCF tracker, for example, performs visual tracker-based localization so that the localization results of the targets for the new video frame can be used for the data association. The DeepSORT tracker, on the other hand, extracts the Re-ID features from all the detector object bboxes for data association. More details are covered in the respective sections for each type of tracker.
Data Association¶
For data association, various types of similarity metrics are used to calculate the matching score between the detector objects and the existing targets, including:
Location similarity (i.e., proximity)
Bounding box size similarity
Visual appearance similarity (specific to NvDCF tracker)
Re-ID feature similarity (specific to DeepSORT tracker)
For the proximity between two objects/targets, the intersection-over-union (IOU) is a typical metric that is widely used, but it also depends on the size similarity between them. The similarity of the bbox size between two objects can be used explicitly, which is calculated as the ratio of the size of the smaller bbox over the larger one.
The total score for association for a pair of objects/targets is calculated as the weighted sum of all the metrics, whose weights are configured by matchingScoreWeight4Iou
for IOU score, matchingScoreWeight4SizeSimilarity
for size similarity, and matchingScoreWeight4VisualSimilarity
for the visual similarity, all under the DataAssociator
section in the low-level tracker config file. In addition to the weights for those metrics, users can also set a minimum threshold for them by configuring minMatchingScore4Iou
, minMatchingScore4SizeSimilarity
, and minMatchingScore4VisualSimilarity
for IOU, the size similarity, and the visual similarity, respectively. The minimum threshold for the overall matching score can also be set by minMatchingScore4Overall
. Regarding the matching algorithm, users can employ an efficient greedy algorithm or a Hungarian-like algorithm for optimal bipartite matching by setting associationMatcherType
.
During the matching, a detector object is associated/matched with a target that belongs to the same class by default to minimize false matching. However, this can be disabled by setting checkClassMatch: 0
, allowing objects to be associated regardless of their object class IDs. This can be useful when employing a detector like YOLO, which can detect many classes of objects and may produce false classifications on the same object over time.
The output of the data association module consists of three sets of objects/targets:
The unmatched detector objects
The matched pairs of the detector objects and the existing targets
The unmatched targets
The unmatched detector objects are among the objects detected by the PGIE detector, yet not associated with any of the existing targets. An unmatched detector object is considered a newly observed object that needs to be tracked, unless it is determined to be a duplicate of an existing target. If the maximum IOU score of a new detector object to any of the existing targets is lower than minIouDiff4NewTarget
, a new target tracker is created to track the object, since it is not a duplicate of an existing target.
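Below is a hedged example of how the data association parameters discussed above might look in the DataAssociator section of a low-level tracker config file. The values are illustrative only, not recommendations; the defaults are listed in the configuration table later in this section.

```yaml
DataAssociator:
  associationMatcherType: 0                   # GREEDY=0, GLOBAL=1
  checkClassMatch: 1                          # associate only same-class objects
  minMatchingScore4Overall: 0.0               # min total matching score
  minMatchingScore4SizeSimilarity: 0.6        # min bbox size similarity score
  minMatchingScore4Iou: 0.0                   # min IOU score
  minMatchingScore4VisualSimilarity: 0.7      # min visual similarity score (NvDCF)
  matchingScoreWeight4SizeSimilarity: 0.6     # weight for the size similarity term
  matchingScoreWeight4Iou: 0.4                # weight for the IOU term
  matchingScoreWeight4VisualSimilarity: 0.5   # weight for the visual similarity term (NvDCF)
```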
Target Management and Error Handling¶
Although a new object may be detected by the detector (i.e., a detector object), there is a possibility that it is a false positive. To suppress such noise in detection, the NvMultiObjectTracker library employs a technique called Late Activation, where a newly detected object is examined for a period of time and activated for long-term tracking only if it survives that period. To be more specific, whenever a new object is detected, a new tracker is created to track the object, but the target is initially put into the Tentative mode, which is a probationary period whose length is defined by probationAge under the TargetManagement section of the config file. During this probationary period, the tracker output will not be reported to the downstream, since the target is not validated yet; however, the unreported tracker output data (i.e., the past-frame data) is stored within the low-level tracker for later reporting.
Note
To allow the low-level tracker library to store and report the past-frame data, users need to set enable-past-frame=1 and enable-batch-process=1 under the [tracker] section in the deepstream-app config file. Note that the past-frame data is only supported in the batch processing mode.
The same target may be detected in the next frame; however, there could be a false negative by the detector (i.e., a missed detection), resulting in an unsuccessful data association for the target. The NvMultiObjectTracker library employs another technique called Shadow Tracking, where a target is still tracked in the background for a period of time even when it is not associated with a detector object. Whenever a target is not associated with a detector object for a given frame, an internal variable of the target called shadowTrackingAge is incremented. Once the target is associated with a detector object, shadowTrackingAge is reset to zero.
If the target is in the Tentative mode and the shadowTrackingAge reaches the earlyTerminationAge specified in the config file, the target is terminated prematurely (referred to as Early Termination). If the target is not terminated during the Tentative mode and is successfully associated with a detector object, the target is activated and put into the Active mode, and it starts reporting tracker outputs to the downstream. If the past-frame data is enabled, the data tracked during the Tentative mode is reported as well, since it was not reported yet. Once a target is activated (i.e., in the Active mode), if the target is not associated for a given time frame (or the tracker confidence falls below a threshold), it is put into the Inactive mode and its shadowTrackingAge is incremented, yet it is still tracked in the background. However, the target is terminated if the shadowTrackingAge exceeds maxShadowTrackingAge.
The state transitions of a target tracker are summarized in the following diagram:
The NvMultiObjectTracker library can generate unique target IDs to some extent. If enabled by setting useUniqueID: 1, each video stream is assigned a 32-bit random number during the initialization stage. All the targets created from the same video stream have the upper 32 bits of their uint64_t-type target ID set to the same per-stream random number, while the lower 32 bits of the target ID start from 0. The randomly generated upper 32-bit number allows the target IDs from a particular video stream to increment from a random position in the possible ID space. If disabled (i.e., useUniqueID: 0, which is the default), both the upper and lower 32 bits start from 0, so the target IDs are incremented from 0 for every run.

Note that the incrementing of the lower 32 bits of the target ID is done across all the video streams handled by the same NvMultiObjectTracker library instantiation. Thus, even if unique ID generation is disabled, the tracker IDs will be unique within the same pipeline run. If unique ID generation is disabled and there are, for example, three objects in Stream 1 and two objects in Stream 2, the target IDs will be assigned from 0 to 4 (instead of 0 to 2 for Stream 1 and 0 to 1 for Stream 2) as long as the two streams are being processed by the same library instantiation.
The NvMultiObjectTracker library pre-allocates all the GPU memories during initialization based on:
The number of streams to be processed
The maximum number of objects to be tracked per stream (denoted as
maxTargetsPerStream
)
Thus, the CPU/GPU memory usage by the NvMultiObjectTracker library is almost linearly proportional to the total number of objects being tracked, which is (number of video streams) × (maxTargetsPerStream), except for the scratch memory space used by dependent libraries (such as cuFFT™, TensorRT™, etc.). Thanks to the pre-allocation of all the necessary memory, the NvMultiObjectTracker library is not expected to show memory growth during long-term runs even when the number of objects increases over time.
Once the number of objects being tracked reaches the configured maximum value (i.e., maxTargetsPerStream
), any new objects will be discarded until some of the existing targets are terminated. Note that the number of objects being tracked includes the targets that are being tracked in the shadow tracking mode. Therefore, NVIDIA recommends that users set maxTargetsPerStream
large enough to accommodate the maximum number of objects of interest that may appear in a frame, as well as the objects that may have been tracked from the past frames in the shadow tracking mode.
The minDetectorConfidence
property under BaseConfig
section in a low-level tracker config file sets the confidence level below which the detector objects are filtered out.
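For reference, a hedged BaseConfig and TargetManagement snippet from a low-level tracker config file might look like the following; the values mirror the defaults listed in the configuration table later in this section and are illustrative only.

```yaml
BaseConfig:
  minDetectorConfidence: 0.0    # filter out detector objects below this confidence

TargetManagement:
  maxTargetsPerStream: 30       # upper bound on tracked targets (incl. shadow-tracked ones)
  minIouDiff4NewTarget: 0.5     # min IOU to existing targets for discarding a new target
  probationAge: 5               # length of the Tentative (probationary) period [frames]
  maxShadowTrackingAge: 38      # terminate a target once shadowTrackingAge exceeds this
  earlyTerminationAge: 2        # early termination threshold in Tentative mode [frames]
  enableBboxUnClipping: 0       # estimate the full bbox beyond the FOV limit if enabled
```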
State Estimation¶
The NvMultiObjectTracker library employs two types of state estimators, both of which are based on Kalman Filter (KF): Simple KF and Regular KF. The Simple KF has 6
states defined, which are {x, y, w, h, dx, dy}
, where x
and y
indicate the coordinates of the top-left corner of a target bbox, while w
and h
denote the width and the height of the bbox, respectively. dx
and dy
denote the velocity of x
and y
states. The Regular KF, on the other hand, has 8
states defined, which are {x, y, w, h, dx, dy, dw, dh}
, where dw
and dh
are the velocity of w
and h
states and the rest is the same as the Simple KF. Both types of Kalman Filters employ a constant velocity model for generic use. The measurement vector is defined as {x, y, w, h}
. Furthermore, there is an option to use bbox aspect ratio a
and its velocity da
instead of w
and dw
when useAspectRatio
is enabled, which is specially used by DeepSORT. In case the state estimator is used for a generic use case (like in the NvDCF tracker), the process noise variance for {x, y}
, {w, h}
, and {dx, dy, dw, dh}
can be configured by processNoiseVar4Loc
, processNoiseVar4Size
, and processNoiseVar4Vel
, respectively.
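As a generic illustration (not library-specific code), the constant-velocity model used by the Simple KF can be written as follows for a time step of one frame, with the state and measurement vectors as defined above:

\[
\begin{aligned}
\mathbf{s}_k &= [\,x_k,\; y_k,\; w_k,\; h_k,\; dx_k,\; dy_k\,]^\top, \qquad \mathbf{z}_k = [\,x_k,\; y_k,\; w_k,\; h_k\,]^\top\\
x_{k+1} &= x_k + dx_k, \qquad y_{k+1} = y_k + dy_k\\
w_{k+1} &= w_k, \qquad h_{k+1} = h_k, \qquad dx_{k+1} = dx_k, \qquad dy_{k+1} = dy_k
\end{aligned}
\]

The Regular KF extends this with dw and dh so that the bbox size also evolves with its own velocity.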
When a visual tracker module is enabled (like in the NvDCF tracker), there could be two different measurements from the state estimator’s point of view: (1) the bbox from the detector at PGIE and (2) the bbox from the tracker’s localization. This is because the NvDCF tracker module is capable of localizing targets using its own learned filter. The measurement noise variance for these two different types of measurements can be configured by measurementNoiseVar4Detector
and measurementNoiseVar4Tracker
. These parameters are expected to be tuned or optimized based on the detector’s and the tracker’s characteristics for better measurement fusion.
The usage of the state estimator in the DeepSORT tracker differs slightly from the aforementioned generic use case in that it is basically a Regular KF, yet with a couple of differences as per the original paper and implementation (check the references in the DeepSORT Tracker (Alpha) section):
Use of the aspect ratio a and the height h (instead of w and h) to estimate the bbox size

The process and measurement noises that are proportional to the bounding box height (instead of constant values)
To allow these differences, the state estimator module in the NvMultiObjectTracker library has a set of additional config parameters:
useAspectRatio to enable the use of a (instead of w)

noiseWeightVar4Loc and noiseWeightVar4Vel as the proportion coefficients for the measurement and velocity noise, respectively
Note that if these two parameters are set, the fixed process noise and measurement noise parameters for the generic use cases will be ignored.
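A hedged StateEstimator snippet illustrating both styles might look like the following; the generic values mirror the defaults in the configuration table below, and the DeepSORT-style lines are shown commented out as an alternative (values illustrative only).

```yaml
StateEstimator:
  stateEstimatorType: 1             # DUMMY=0, SIMPLE=1, REGULAR=2

  # Generic (fixed) noise parameters, e.g., for the NvDCF tracker:
  processNoiseVar4Loc: 2.0
  processNoiseVar4Size: 1.0
  processNoiseVar4Vel: 0.1
  measurementNoiseVar4Detector: 4.0
  measurementNoiseVar4Tracker: 16.0

  # DeepSORT-style alternative (height-proportional noise; uncomment to use):
  # stateEstimatorType: 2
  # useAspectRatio: 1
  # noiseWeightVar4Loc: 0.05
  # noiseWeightVar4Vel: 0.00625
```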
Motion-based Target Re-Association¶
In DeepStream SDK 6.0, an experimental feature called motion-based target re-association is introduced. It addresses a common problem that occurs when objects undergo partial to full occlusions in a gradual manner. During this course of action, the detector in the PGIE module may capture only a part of the object (due to partial visibility), resulting in ill-sized, ill-centered bboxes on the target. This may cause larger errors in target state estimation, further causing potentially significant errors in target state prediction. If this happens, when the object recovers from the partial or full occlusion, it is likely that the tracker cannot be associated with the re-appearing object due to the size and location prediction errors, resulting in tracking failure and an ID switch. Such a re-association problem can typically be handled as post-processing; however, for real-time analytics applications, it is often expected to be handled seamlessly as a part of real-time multi-object tracking.
This newly-introduced target re-association technique takes advantage of the Late Activation and Shadow Tracking features in the NvMultiObjectTracker library to realize the seamless real-time target re-association by the following steps:
Tracklet Prediction: Whenever an existing target is not matched (associated) with a detection bbox for a prolonged period (same as probationAge
), the target is considered lost. While the visual tracker module keeps track of the target in the shadow tracking mode, a predicted tracklet (whose length is configured by trajectoryProjectionLength
) is generated using some of the recently matched tracklet points (whose length is set by prepLength4TrajectoryProjection
) and stored into an internal DB until it is matched again with a detection bbox or re-associated with another target.
Target ID Acquisition: When a new target is instantiated, its validity is examined for a few frames (i.e., probationAge
) and a target ID is assigned only if validated (i.e., Late Activation), after which the target state reporting starts. During the target ID acquisition, the new target is examined to see if it matches one of the predicted tracklets from the existing targets in the internal DB where the aforementioned predicted tracklets are stored. If matched, it means that the new target is actually the re-appearance of an existing target that disappeared in the past. Then, the new target is associated with the existing target and its tracklet is fused into it as well. Otherwise, a new target ID is assigned.
Tracklet Matching: During the tracklet matching process in the previous step, the valid candidate tracklets are queried from the DB based on the feasible time window. After that, the tracklet similarities are computed using, say, a Dynamic Time Warping (DTW)-like algorithm based on the average IOU along the tracklet with various criteria including the minimum average IOU score (i.e., minTrackletMatchingScore
), maximum angular difference in motion (i.e., maxAngle4TrackletMatching
), minimum speed similarity (i.e., minSpeedSimilarity4TrackletMatching
), and minimum bbox size similarity (i.e., minBboxSizeSimilarity4TrackletMatching
). To limit the search space in time, the max time gap in frames can be configured by maxTrackletMatchingTimeSearchRange
.
Tracklet Fusion: Once two tracklets are associated, they are fused together to generate one smooth tracklet based on the matching status with detector and the confidence at each point.
Below is a sample configuration to be added to Trajectory Management module to enable this feature:
TrajectoryManagement:
  useUniqueID: 1    # Use 64-bit long Unique ID when assigning tracker ID. Default is [true]
  enableReAssoc: 1    # Enable Re-Assoc

  # [Re-Assoc: Motion-based]
  minTrajectoryLength4Projection: 20   # min trajectory length required to make projected trajectory
  prepLength4TrajectoryProjection: 10  # the length of the trajectory during which the state estimator is updated to make projections
  trajectoryProjectionLength: 90       # the length of the projected trajectory

  # [Re-Assoc: Trajectory Similarity]
  minTrackletMatchingScore: 0.5        # min tracklet similarity score for matching in terms of average IOU between tracklets
  maxAngle4TrackletMatching: 30        # max angle difference for tracklet matching [degree]
  minSpeedSimilarity4TrackletMatching: 0.2     # min speed similarity for tracklet matching
  minBboxSizeSimilarity4TrackletMatching: 0.6  # min bbox size similarity for tracklet matching
  maxTrackletMatchingTimeSearchRange: 20       # the search space in time for max tracklet similarity
Note that motion-based target re-association can be effective only when the state estimator is enabled, otherwise the tracklet prediction will not be made properly.
Bounding-box Unclipping¶
Another small experimental feature is the bounding box unclipping. If a target is fully visible within the field-of-view (FOV) of the camera but starts going out of the FOV, the target would be partially visible and the bounding box (i.e., bbox) may capture only a part of the target (i.e., clipped by the FOV) until it fully exits the scene. If it is expected that the size of the bbox doesn’t change much around the border of the video frame, the full bbox can be estimated beyond the FOV limit using the bbox size estimated when the target was fully visible. This feature can be enabled by setting enableBboxUnClipping: 1
under TargetManagement
module in the low-level config file.
Configuration Parameters¶
The following table summarizes the configuration parameters for the common modules in the NvMultiObjectTracker low-level tracker library.
| Module | Property | Meaning | Type and Range | Default value |
|---|---|---|---|---|
| Base Config | minDetectorConfidence | Minimum detector confidence for a valid object | Float, -inf to inf | minDetectorConfidence: 0.0 |
| Target Management | maxTargetsPerStream | Max number of targets to track per stream | Integer, 0 to 65535 | maxTargetsPerStream: 30 |
| | minIouDiff4NewTarget | Min IOU to existing targets for discarding new target | Float, 0 to 1 | minIouDiff4NewTarget: 0.5 |
| | enableBboxUnClipping | Enable bounding-box unclipping | Boolean | enableBboxUnClipping: 0 |
| | probationAge | Length of the probationary period [frames] | Integer, ≥0 | probationAge: 5 |
| | maxShadowTrackingAge | Maximum length of shadow tracking [frames] | Integer, ≥0 | maxShadowTrackingAge: 38 |
| | earlyTerminationAge | Early termination age [frames] | Integer, ≥0 | earlyTerminationAge: 2 |
| Trajectory Management | useUniqueID | Enable unique ID generation scheme | Boolean | useUniqueID: 0 |
| | enableReAssoc | Enable motion-based target re-association | Boolean | enableReAssoc: 0 |
| | minTrajectoryLength4Projection | Min tracklet length of a target (i.e., age) to perform trajectory projection [frames] | Integer, ≥0 | minTrajectoryLength4Projection: 20 |
| | prepLength4TrajectoryProjection | Length of the trajectory during which the state estimator is updated to make projections [frames] | Integer, ≥0 | prepLength4TrajectoryProjection: 10 |
| | trajectoryProjectionLength | Length of the projected trajectory [frames] | Integer, ≥0 | trajectoryProjectionLength: 90 |
| | minTrackletMatchingScore | Min tracklet similarity score for matching in terms of average IOU between tracklets | Float, 0.0 to 1.0 | minTrackletMatchingScore: 0.4 |
| | maxAngle4TrackletMatching | Max angle difference for tracklet matching [degree] | Integer, 0 to 180 | maxAngle4TrackletMatching: 40 |
| | minSpeedSimilarity4TrackletMatching | Min speed similarity for tracklet matching | Float, 0.0 to 1.0 | minSpeedSimilarity4TrackletMatching: 0.3 |
| | minBboxSizeSimilarity4TrackletMatching | Min bbox size similarity for tracklet matching | Float, 0.0 to 1.0 | minBboxSizeSimilarity4TrackletMatching: 0.6 |
| | maxTrackletMatchingTimeSearchRange | Search space in time for max tracklet similarity [frames] | Integer, ≥0 | maxTrackletMatchingTimeSearchRange: 20 |
| Data Associator | associationMatcherType | Type of matching algorithm { GREEDY=0, GLOBAL=1 } | Integer, 0 or 1 | associationMatcherType: 0 |
| | checkClassMatch | Enable associating only the same-class objects | Boolean | |
| | minMatchingScore4Overall | Min total score for valid matching | Float, 0.0 to 1.0 | minMatchingScore4Overall: 0.0 |
| | minMatchingScore4SizeSimilarity | Min bbox size similarity score for valid matching | Float, 0.0 to 1.0 | minMatchingScore4SizeSimilarity: 0.0 |
| | minMatchingScore4Iou | Min IOU score for valid matching | Float, 0.0 to 1.0 | minMatchingScore4Iou: 0.0 |
| | minMatchingScore4VisualSimilarity | Min visual similarity score for valid matching | Float, 0.0 to 1.0 | minMatchingScore4VisualSimilarity: 0.0 |
| | matchingScoreWeight4SizeSimilarity | Weight for size similarity term in matching cost function | Float, 0.0 to 1.0 | matchingScoreWeight4SizeSimilarity: 0.0 |
| | matchingScoreWeight4Iou | Weight for IOU term in matching cost function | Float, 0.0 to 1.0 | matchingScoreWeight4Iou: 1.0 |
| | matchingScoreWeight4VisualSimilarity | Weight for visual similarity term in matching cost function | Float, 0.0 to 1.0 | matchingScoreWeight4VisualSimilarity: 0.0 |
| State Estimator | stateEstimatorType | Type of state estimator among { DUMMY=0, SIMPLE=1, REGULAR=2 } | Integer, 0 to 2 | stateEstimatorType: 0 |
| | processNoiseVar4Loc | Process noise variance for bbox center | Float, 0.0 to inf | processNoiseVar4Loc: 2.0 |
| | processNoiseVar4Size | Process noise variance for bbox size | Float, 0.0 to inf | processNoiseVar4Size: 1.0 |
| | processNoiseVar4Vel | Process noise variance for velocity | Float, 0.0 to inf | processNoiseVar4Vel: 0.1 |
| | measurementNoiseVar4Detector | Measurement noise variance for detector’s detection | Float, 0.0 to inf | measurementNoiseVar4Detector: 4.0 |
| | measurementNoiseVar4Tracker | Measurement noise variance for tracker’s localization | Float, 0.0 to inf | measurementNoiseVar4Tracker: 16.0 |
| | noiseWeightVar4Loc | Noise covariance weight for bbox location; if set, location noise will be proportional to box height | Float, >0.0 considered as set | noiseWeightVar4Loc: -0.1 |
| | noiseWeightVar4Vel | Noise covariance weight for bbox velocity; if set, velocity noise will be proportional to box height | Float, >0.0 considered as set | noiseWeightVar4Vel: -0.1 |
| | useAspectRatio | Use aspect ratio in Kalman Filter’s states | Boolean | useAspectRatio: 0 |
More details on how to tune these parameters with some samples can be found in NvMultiObjectTracker Parameter Tuning Guide.
IOU Tracker¶
The NvMultiObjectTracker library provides an object tracker that has only the essential and minimum set of functionalities for multi-object tracking, which is called the IOU tracker. The IOU tracker performs only the following functionalities:
Data association between the detector objects from a new video frame and the existing targets for the video frame
Target management based on the data association results including the target state update and the creation and termination of targets
The error handling mechanisms like Late Activation and Shadow Tracking are an integral part of the target management module of the NvMultiObjectTracker library; thus, such features are inherently enabled in the IOU tracker.

The IOU tracker can be used as a performance baseline as it consumes the minimum amount of computational resources. A sample configuration file, named config_tracker_IOU.yml, is provided as a part of the DeepStream SDK package.
NvDCF Tracker¶
The NvDCF tracker employs a visual tracker based on the discriminative correlation filter (DCF), which learns a target-specific correlation filter and uses it to localize the same target in subsequent frames. Such correlation filter learning and localization are usually carried out on a per-object basis in a typical MOT implementation, creating a potentially large number of small CUDA kernel launches when processed on the GPU. This inherently poses challenges in maximizing GPU utilization, especially when a large number of objects from multiple video streams are expected to be tracked on a single GPU.
To address such performance issues, the GPU-accelerated operations of the NvDCF tracker are designed to be executed in the batch processing mode to maximize GPU utilization despite the small CUDA kernels inherent in the per-object tracking model. The batch processing mode is applied to the entire set of tracking operations, including bbox cropping and scaling, visual feature extraction, correlation filter learning, and localization. This can be viewed as a model similar to batched cuFFT or batched cuBLAS calls, but it differs in that the batched MOT execution model spans many operations at a higher level. The batch processing capability is extended from multi-object batching to the batching of multiple streams for even greater efficiency and scalability.
Thanks to its visual tracking capability, the NvDCF tracker can localize and keep track of the targets even when the detector in PGIE misses them (i.e., false negatives) for potentially an extended period of time caused by partial or full occlusions, resulting in more robust tracking. The enhanced robustness characteristics allow users to use a higher maxShadowTrackingAge
value for longer-term object tracking and also allows PGIE’s interval to be higher, at the cost of only a slight degradation in accuracy.
In addition to the visual tracker module, the NvDCF tracker employs a Kalman filter-based state estimator to better estimate and predict the states of the targets.
Visual Tracking¶
For each tracked target, NvDCF tracker defines a search region around its predicted location in the next frame large enough for the same target to be detected in the search region. The location of a target on a new video frame is predicted by using the state estimator module. The searchRegionPaddingScale
property determines the size of the search region as a multiple of the diagonal of the target’s bounding box. The size of the search region would be determined as:
\[
\begin{aligned}
SearchRegion_{width} &= w + searchRegionPaddingScale \ast \sqrt{w \ast h}\\
SearchRegion_{height} &= h + searchRegionPaddingScale \ast \sqrt{w \ast h}
\end{aligned}
\]
where \(w\) and \(h\) are the width and height of the target’s bounding box, respectively.
Once the search region is defined for each target at its predicted location, the image patches from each of the search regions are cropped and scaled to a predefined feature image size, from which the visual features are extracted. The featureImgSizeLevel
property defines the size of the feature image, and its range is from 1 to 5. Each level between 1 and 5 corresponds to 12x12, 18x18, 24x24, 36x36, and 48x48, respectively, for each feature channel. A lower value of featureImgSizeLevel
causes NvDCF to use a smaller feature size, potentially increasing GPU performance but at the cost of accuracy and robustness. Consider the relationship between featureImgSizeLevel
and searchRegionPaddingScale
when configuring the parameters. If searchRegionPaddingScale
is increased while featureImgSizeLevel
is fixed, the number of pixels corresponding to the target itself in the feature images will be effectively decreased.
For each cropped image patch, the visual appearance features such as ColorNames and/or Histogram-of-Oriented-Gradient (HOG) are extracted. The type of visual features to be used can be configured by setting useColorNames
and/or useHog
. The HOG features consist of 18 channels based on the number of bins for different orientations, while the ColorNames features have 10 channels. If both features are used (by setting useColorNames: 1 and useHog: 1), the total number of channels would be 28. Therefore, if one uses both HOG and ColorNames with featureImgSizeLevel: 5, the dimension of the visual features that represent a target would be 28x48x48. The more channels of visual features are used, the higher the accuracy, but at the cost of increased computational complexity and reduced performance. The NvDCF tracker uses NVIDIA’s VPI™ library for extracting those visual features.
The correlation filters are generated with an attention window (using a Hanning window) applied at the center of the target bbox. Users are allowed to move the center of the attention window in the vertical direction. For example, featureFocusOffsetFactor_y: -0.2
places the center of the attention window at y=-0.2
in the feature map, where the relative range of the height is [-0.5, 0.5]
. Consider that typical surveillance or CCTV cameras are mounted at a moderately high position to monitor a wide area of the environment, say, a retail store or a traffic intersection. From those vantage points, more occlusions can occur at the lower part of the body of persons or vehicles by other persons or vehicles. Moving the attention window up a bit may improve the accuracy and robustness for those use cases.
Once a correlation filter is generated for a target, typical DCF-based trackers usually employ an exponential moving average for temporal consistency when the optimal correlation filter is created and updated over consecutive frames. The learning rate for this moving average can be configured by filterLr
and filterChannelWeightsLr
for the correlation filters and their channel weights, respectively. The standard deviation of the Gaussian for the desired response used when creating an optimal DCF filter can also be configured by gaussianSigma
.
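Putting the parameters above together, a hedged VisualTracker section might look like the following; the values are illustrative only and should be tuned per use case (see the NvMultiObjectTracker Parameter Tuning Guide).

```yaml
VisualTracker:
  useColorNames: 1                  # 10-channel ColorNames features
  useHog: 1                         # 18-channel HOG features (28 channels total with ColorNames)
  featureImgSizeLevel: 3            # 1..5 -> 12x12 .. 48x48 per feature channel
  featureFocusOffsetFactor_y: -0.2  # shift the attention window upward
  searchRegionPaddingScale: 1       # search region padding as a multiple of the bbox diagonal
  filterLr: 0.075                   # learning rate for the correlation filter
  filterChannelWeightsLr: 0.1       # learning rate for the per-channel weights
  gaussianSigma: 0.75               # std dev of the desired Gaussian response
```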
Data Association¶
The association of target IDs across frames for robust tracking typically entails visual appearance-based similarity matching, for which the visual appearance features are extracted at each candidate location. This is usually a computationally expensive process and often becomes a performance bottleneck in object tracking. Unlike existing approaches that extract visual features from all the candidate locations and perform feature matching among all the candidate objects, the NvDCF tracker takes advantage of the correlation response (which is already obtained during the target localization stage) as the tracking confidence map of each tracker over a search region and simply looks up the confidence values at each candidate location (i.e., the location of each detector object) to get the visual similarity without any explicit computation. By comparing these confidences between trackers, we can identify which tracker has a higher visual similarity to a particular detector object and use it as a part of the matching score for data association. Therefore, the visual similarity matching in the data association process can be carried out very efficiently through a simple look-up table (LUT) operation on existing correlation responses.
In the animated figure below, the left side shows the target within its search region, while the right side shows the correlation response map (where the deep red color indicates higher confidence and deep blue indicates lower confidence). In the confidence map, the yellow cross (i.e., +
) around the center indicates the peak location of the correlation response, while the purple x
marks indicate the centers of nearby detector bboxes. The correlation response values at those purple x
locations indicate the confidence score on how likely the same target exists at that location in terms of the visual similarity.
If there are multiple detector bboxes (i.e., purple x
) around the target, like the one in the figure below, the data association module takes care of the matching based on the visual similarity score and the configured weight and minimum threshold, which are matchingScoreWeight4VisualSimilarity
and minMatchingScore4VisualSimilarity
, respectively.
Visualization of Sample Outputs and Correlation Responses¶
This section presents the visualization of some sample outputs and internal states (such as correlation responses for a few selected targets) to help users better understand how the NvDCF tracker works, especially regarding the visual tracker module.
PeopleNet + NvDCF¶
PeopleNet is one of the pre-trained models that users can download from NVIDIA NGC catalog. For the output visualization, a deepstream-app
pipeline is first constructed with the following components:
Detector: PeopleNet (w/ ResNet-34 as backbone)
Post-processing algorithm for object detection: Hybrid clustering (i.e., DBSCAN + NMS)
Tracker: NvDCF with config_tracker_NvDCF_accuracy.yml configuration
For better visualization, the following changes were also made:
featureImgSizeLevel: 5 is set under the VisualTracker section in config_tracker_NvDCF_accuracy.yml

tracker-height=960 and tracker-width=544 under the [tracker] section in the deepstream-app config file
More details on config files used for the aforementioned pipeline are below:
config_infer_primary_PeopleNet.txt
[property]
## model-specific params like paths to model, engine, label files, etc. are to be added by users
gpu-id=0
net-scale-factor=0.0039215697906911373
input-dims=3;544;960;0
uff-input-blob-name=input_1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=3
interval=0
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
## 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=3
maintain-aspect-ratio=1
[class-attrs-all]
pre-cluster-threshold=0.3
post-cluster-threshold=0.1
nms-iou-threshold=0.6
minBoxes=3
dbscan-min-score=1.1
eps=0.1
detected-min-w=20
detected-min-h=20
The resulting output video of the aforementioned pipeline with (PeopleNet + Hybrid clustering + NvDCF) is shown below, but please note that only ‘Person’-class objects are detected and shown in the video:
While the video above shows the per-stream output, each animated figure below shows (1) the cropped & scaled image patch used for each target on the left side and (2) the corresponding correlation response map for the target on the right side. As mentioned earlier, the yellow +
mark shows the peak location of the correlation response map generated using the learned correlation filter, while the purple x marks show the centers of nearby detector objects.
Person 1 (w/ Blue hat + gray backpack)
Person 6 (w/ Red jacket + gray backpack)
Person 4 (w/ Green jacket)
Person 5 (w/ Cyan jacket)
The figures above show how the correlation responses progress over time for the cases of no occlusion, partial occlusion, and full occlusions happening. It can be seen that even when a target undergoes a full occlusion for a prolonged period, the NvDCF tracker is able to keep track of the targets in many cases.
If featureImgSizeLevel: 3
is used instead for better performance, the resolution of the image patch used for each target would be lower, as shown in the figures below.
Person 1 (w/ Blue hat + gray backpack)
Person 6 (w/ Red jacket + gray backpack)
DetectNet_v2 + NvDCF¶
DetectNet_v2 is one of the pre-trained models that users can download from the NVIDIA NGC catalog; the variant with ResNet-10 as the backbone is also packaged as a part of the DeepStream SDK release. It can detect the Person and Car classes as well as Bicycle and Road sign.
For the output visualization, a deepstream-app
pipeline is first constructed with the following components:
Detector: DetectNet_v2 (w/ ResNet-10 as backbone)
Post-processing algorithm for object detection: Non-Maximum Suppression (NMS)
Tracker: NvDCF with
config_tracker_NvDCF_accuracy.yml
configuration
For better visualization, the following changes were also made:
featureImgSizeLevel: 5 is set under VisualTracker section in config_tracker_NvDCF_accuracy.yml
tracker-height=960 and tracker-width=544 under [tracker] section in the deepstream-app config file
More details on config files used for the aforementioned pipeline are below:
config_infer_primary_DetectNet_v2.txt
[property]
## model-specific params like paths to model, engine, label files, etc. are to be added by users
gpu-id=0
net-scale-factor=0.0039215697906911373
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
force-implicit-batch-dim=1
## 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=2
[class-attrs-all]
topk=25
nms-iou-threshold=0.2
pre-cluster-threshold=0.2
Note that the neural net model used for this pipeline is much lighter than the PeopleNet used in the previous section, because ResNet-10 is used as the backbone of the DetectNet_v2 model for this pipeline. The resulting output video of the aforementioned pipeline with (DetectNet_v2 + NMS + NvDCF) is shown below:
While the video above shows the per-stream output, each animated figure below shows (1) the cropped & scaled image patch used for each target on the left side and (2) the corresponding correlation response map for the target on the right side. Again, the yellow + mark shows the peak location of the correlation response map generated by using the learned correlation filter, while the purple x marks show the centers of nearby detector objects.
Car 40
Car 6
Car 54
Car 224
Even when a target undergoes a full occlusion for a prolonged period or significant visual appearance changes over time due to the changing orientation of targets, the NvDCF tracker is able to keep track of the targets in many cases.
DetectNet_v2 (w/ interval=2) + NvDCF¶
The enhanced robustness of the NvDCF tracker allows users to set a detection interval higher than 0 to improve performance at minimal cost to accuracy. This section presents a sample output from a pipeline with a PGIE module configured with interval=2, meaning that the inference for object detection takes place at every third frame. The sample deepstream-app pipeline is constructed with the following configuration:
Detector: DetectNet_v2 (w/ ResNet-10 as backbone) (w/ interval=2)
Post-processing algorithm for object detection: Non-Maximum Suppression (NMS)
Tracker: NvDCF with
config_tracker_NvDCF_accuracy.yml
configuration
Below is the sample output of the pipeline:
Note that with interval=2, the computational load for the inference for object detection is only a third of that with interval=0, dramatically improving the overall pipeline performance. If an accurate and robust object tracker is used, the accuracy of the overall pipeline is not degraded significantly, potentially yielding a well-balanced tradeoff between performance and accuracy.
Configuration Parameters¶
A few sample configuration files for the NvDCF tracker are provided as a part of the DeepStream SDK package, named as follows:
config_tracker_NvDCF_max_perf.yml
config_tracker_NvDCF_perf.yml
config_tracker_NvDCF_accuracy.yml
The max_perf config file configures the NvDCF tracker to consume the least amount of resources; the perf config file targets use cases that require a decent balance between performance and accuracy; and the accuracy config file maximizes accuracy and robustness by enabling most of the features to their full capability.
The following table summarizes the configuration parameters used in the config files for the NvDCF low-level tracker (except the common modules and parameters already mentioned in an earlier section).
| Module | Property | Meaning | Type and Range | Default value |
|---|---|---|---|---|
| Visual Tracker | visualTrackerType | Type of visual tracker among { DUMMY=0, NvDCF=1 } | Int, [0, 1] | visualTrackerType: 0 |
| | useColorNames | Use ColorNames feature | Boolean | useColorNames: 1 |
| | useHog | Use Histogram-of-Oriented-Gradient (HOG) feature | Boolean | useHog: 0 |
| | featureImgSizeLevel | Size of a feature image | Integer, 1 to 5 | featureImgSizeLevel: 2 |
| | featureFocusOffsetFactor_y | The offset for the center of Hanning window relative to the feature height | Float, -0.5 to 0.5 | featureFocusOffsetFactor_y: 0.0 |
| | filterLr | Learning rate for DCF filter in exponential moving average | Float, 0.0 to 1.0 | filterLr: 0.075 |
| | filterChannelWeightsLr | Learning rate for weights for different feature channels in DCF | Float, 0.0 to 1.0 | filterChannelWeightsLr: 0.1 |
| | gaussianSigma | Standard deviation for Gaussian for desired response | Float, >0.0 | gaussianSigma: 0.75 |
| Target Management | searchRegionPaddingScale | Search region size | Integer, 1 to 3 | searchRegionPaddingScale: 1 |
| | minTrackerConfidence | Minimum detector confidence for a valid target | Float, 0.0 to 1.0 | minTrackerConfidence: 0.6 |
| Data Associator | minMatchingScore4VisualSimilarity | Min visual similarity score for valid matching | Float, 0.0 to 1.0 | minMatchingScore4VisualSimilarity: 0.0 |
| | matchingScoreWeight4VisualSimilarity | Weight for visual similarity term in matching cost function | Float, 0.0 to 1.0 | matchingScoreWeight4VisualSimilarity: 0.0 |
To learn more about NvDCF parameter tuning, see the NvMultiObjectTracker Parameter Tuning Guide.
See also the Troubleshooting in NvDCF Parameter Tuning section for solutions to common problems in tracker behavior and tuning.
DeepSORT Tracker (Alpha)¶
The DeepSORT tracker utilizes deep learning based object appearance information for accurate object matching in different frames and locations, resulting in enhanced robustness over occlusions and reduced ID switches. It applies a pre-trained Re-ID (re-identification) neural network to extract a feature vector for each object, compares the similarity between different objects using the extracted feature vector with a cosine distance metric, and combines it with a state estimator to perform the data association over frames. Users can follow instructions in Setup Official Re-ID Model for a quick hands-on. Check Customize Re-ID Model for more information on working with a custom Re-ID model for object tracking with different architectures and datasets.
Note
The DeepSORT tracker implementation in DeepStream SDK 6.0 is Alpha quality, so users should take that into consideration.
Re-ID¶
For Re-ID, the detector objects provided as inputs are first cropped and resized according to the input size of the Re-ID model used. The parameter keepAspc controls whether the object's aspect ratio is preserved after cropping. A pre-trained convolutional neural network model then processes the objects in batches and outputs, for each detector object, a fixed-dimension vector with L2 norm equal to 1 as the Re-ID feature. NVIDIA TensorRT™ is used to generate an engine from the network for the Re-ID inference. For each target tracker, a gallery of its most recent Re-ID features is kept internally. The size of the gallery can be set by reidHistorySize.
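As an illustration of the gallery bookkeeping described above, the minimal sketch below keeps only the most recent reidHistorySize features per target; the class name ReidFeatureGallery is hypothetical and not part of the NvMultiObjectTracker API.
#include <deque>
#include <vector>

// Hypothetical per-target gallery of Re-ID features, bounded by reidHistorySize.
// Each feature is assumed to be a fixed-dimension vector with unit L2 norm.
class ReidFeatureGallery
{
public:
    explicit ReidFeatureGallery(size_t reidHistorySize) : m_maxSize(reidHistorySize) {}

    // Keep only the most recent reidHistorySize features for this target.
    void addFeature(const std::vector<float>& feature)
    {
        m_features.push_back(feature);
        if (m_features.size() > m_maxSize)
            m_features.pop_front();
    }

    const std::deque<std::vector<float>>& features() const { return m_features; }

private:
    size_t m_maxSize;
    std::deque<std::vector<float>> m_features;
};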
Re-ID Similarity Score¶
For each detector object and each target, a float-type value in the range [0.0, 1.0] is computed as the Re-ID similarity score using the cosine metric. Specifically, the dot product between the Re-ID feature of the detector object and each Re-ID feature in the tracker's gallery is computed, and the maximum of these dot products is taken as the similarity score. The score between the i-th detector object and the j-th target is
\(score_{ij}=\max_{k}(feature\_det_{i}\cdot feature\_track_{jk})\)
, where:
\(\cdot\) denotes the dot product.
\(feature\_det_{i}\) denotes the detector object’s feature.
\(feature\_track_{jk}\) denotes the k-th Re-ID feature in the tracker's gallery, with \(k\) in [1, reidHistorySize].
A detector object and a target can be matched only if the score is larger than a threshold set in minMatchingScore4Overall
.
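The sketch below illustrates the score computation described above, assuming the feature vectors are unit L2-normalized so that the dot product equals the cosine similarity; the helper names are hypothetical.
#include <algorithm>
#include <vector>

// Dot product of two equal-length feature vectors. With unit L2 norm,
// this equals the cosine similarity.
float dotProduct(const std::vector<float>& a, const std::vector<float>& b)
{
    float sum = 0.0f;
    for (size_t k = 0; k < a.size(); ++k)
        sum += a[k] * b[k];
    return sum;
}

// score_ij = max_k ( feature_det_i . feature_track_jk ) over the target's gallery,
// whose size is bounded by reidHistorySize.
float reidSimilarityScore(const std::vector<float>& detFeature,
                          const std::vector<std::vector<float>>& targetGallery)
{
    float best = 0.0f;
    for (const auto& galleryFeature : targetGallery)
        best = std::max(best, dotProduct(detFeature, galleryFeature));
    return best; // a match is allowed only if best > minMatchingScore4Overall
}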
Setup Official Re-ID Model¶
The official Re-ID model is a 10-layer ResNet trained on the MARS dataset. A script and a README file to set up the model are provided in sources/tracker_DeepSORT for the convenience of users. The link to the pre-trained Re-ID model can be found in the Installation section of the official DeepSORT GitHub. Once the model is located, users are advised to do the following:
Download the Re-ID model networks/mars-small128.pb and place it under sources/tracker_DeepSORT.
Make sure TensorRT's uff-converter-tf and graphsurgeon-tf are installed. Then install tensorflow-gpu (version 1.15 recommended) for python3.
Run the provided script to remove nodes not supported by TensorRT and convert the TensorFlow model into UFF format: $ python3 convert.py mars-small128.pb.
Use the provided low-level config file for DeepSORT (i.e., config_tracker_DeepSORT.yml) in the gst-nvtracker plugin, and change uffFile to match the UFF model path.
The official model can run directly at FP32 or FP16 precision, set by networkMode. To run the model in INT8 mode, users need to create a calibration table on their own and specify its path in calibrationTableFile. Please refer to INT8 Inference Using Custom Calibration in the TensorRT documentation for more information.
Data Association¶
For the data association in the DeepSORT tracker, two metrics are used:
Proximity
Re-ID based visual similarity
For the proximity score, the Mahalanobis distance between a target and a detector object is calculated using the target’s predicted location and its associated uncertainty. More specifically, the Mahalanobis distance for the i-th detector object and the j-th target is calculated as:
\(dist_{ij}=(D_i-Y_j)^TS_j^{-1}(D_i-Y_j)\)
where:
\(D_i\) denotes the i-th detected bbox in {x, y, a, h} format.
\(Y_j\) denotes the predicted states {x', y', a', h'} from state estimator for the j-th tracker.
\(S_j\) denotes the predicted covariance from state estimator for the j-th tracker.
Following the official DeepSORT, the threshold on the Mahalanobis distance is the 95% confidence interval computed from the inverse Chi-square distribution. That means a detected object and a tracker can be matched only if their Mahalanobis distance is no greater than 9.4877. This threshold is set by thresholdMahalanobis.
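For illustration, below is a minimal sketch of the squared Mahalanobis distance and the gating test described above. It assumes the inverse of the predicted covariance \(S_j\) is already available; the helper names are hypothetical and not part of the library API.
#include <array>

// 4-D state in {x, y, a, h} format (center x/y, aspect ratio, height).
using State4 = std::array<float, 4>;
using Mat4   = std::array<std::array<float, 4>, 4>;

// Squared Mahalanobis distance dist = (D - Y)^T * S^{-1} * (D - Y),
// assuming the caller already has the inverse of the predicted covariance S.
float mahalanobisDistance(const State4& detection, const State4& prediction, const Mat4& covInv)
{
    State4 diff;
    for (int i = 0; i < 4; ++i)
        diff[i] = detection[i] - prediction[i];

    float dist = 0.0f;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            dist += diff[i] * covInv[i][j] * diff[j];
    return dist;
}

// Gating as described above: a pair remains a match candidate only if the distance
// does not exceed the chi-square 95% threshold for a 4-D measurement (9.4877).
bool passesMahalanobisGate(float dist, float thresholdMahalanobis = 9.4877f)
{
    return dist <= thresholdMahalanobis;
}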
The Re-ID based visual similarity score is computed based on the cosine distance of the Re-ID feature vectors between a detector object and a target.
For each target, a set of candidate detector objects is identified and filtered using both metrics in order to minimize the computational cost of the matching process. Given the identified candidate set for each target, a greedy algorithm can be used to find the best matches based on the Re-ID similarity scores.
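The sketch below illustrates one simple greedy association pass consistent with the description above: candidate pairs that survived both gates are sorted by Re-ID similarity and matched greedily. The function name and data layout are hypothetical and do not reflect the library's actual implementation.
#include <algorithm>
#include <tuple>
#include <utility>
#include <vector>

// Greedy association sketch: `reidScore[i][j]` is the Re-ID similarity between
// detector object i and target j; `gated[i][j]` is true if the pair passed
// both the Mahalanobis and the Re-ID gates.
std::vector<std::pair<int, int>> greedyAssociate(
    const std::vector<std::vector<float>>& reidScore,
    const std::vector<std::vector<bool>>&  gated,
    float minMatchingScore4Overall)
{
    // Collect all candidate pairs that survived gating and the overall score threshold.
    std::vector<std::tuple<float, int, int>> candidates; // (score, detIdx, targetIdx)
    for (size_t i = 0; i < reidScore.size(); ++i)
        for (size_t j = 0; j < reidScore[i].size(); ++j)
            if (gated[i][j] && reidScore[i][j] > minMatchingScore4Overall)
                candidates.emplace_back(reidScore[i][j], (int)i, (int)j);

    // Greedy: repeatedly take the highest-scoring pair whose detector object and target are unused.
    std::sort(candidates.begin(), candidates.end(),
              [](const auto& a, const auto& b) { return std::get<0>(a) > std::get<0>(b); });

    std::vector<bool> detUsed(reidScore.size(), false);
    std::vector<bool> trkUsed(reidScore.empty() ? 0 : reidScore[0].size(), false);
    std::vector<std::pair<int, int>> matches;
    for (const auto& cand : candidates)
    {
        int det = std::get<1>(cand);
        int trk = std::get<2>(cand);
        if (detUsed[det] || trkUsed[trk])
            continue;
        detUsed[det] = trkUsed[trk] = true;
        matches.emplace_back(det, trk);
    }
    return matches;
}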
Customize Re-ID Model¶
Apart from the Re-ID model provided in the official DeepSORT repository, the DeepSORT implementation in the NvMultiObjectTracker library allows users to use a custom Re-ID model of their choice, as long as it is in the UFF format and the output of the network for each object is a single vector with unit L2 norm. The Re-ID similarity score will then be computed based on the cosine metric and used to perform the data association in the same way as with the official model. The steps are:
Train a Re-ID network using deep learning frameworks such as TensorFlow or PyTorch.
Make sure the network layers are supported by TensorRT and convert the model into UFF format. Mixed precision inference is still supported, and a calibration cache is required for INT8 mode.
Specify the following parameters in the low-level tracker config file based on the properties of the custom model. Then run DeepStream SDK with the new Re-ID model.
reidFeatureSize
reidHistorySize
inferDims
colorFormat
networkMode
offsets
netScaleFactor
inputBlobName
outputBlobName
uffFile
modelEngineFile
A sample config file for the DeepSORT tracker is provided as a part of the DeepStream SDK package, which is config_tracker_DeepSORT.yml
.
Configuration Parameters¶
The following table summarizes the configuration parameters for the DeepSORT low-level tracker.
| Module | Property | Meaning | Type and Range | Default value |
|---|---|---|---|---|
| Re-ID | reidType | The type of Re-ID network among { DUMMY=0, DEEP=1 } | Integer, [0, 1] | reidType: 0 |
| | batchSize | Batch size of Re-ID network | Integer, >0 | batchSize: 1 |
| | workspaceSize | Workspace size to be used by Re-ID TensorRT engine, in MB | Integer, >0 | workspaceSize: 20 |
| | reidFeatureSize | Size of Re-ID feature | Integer, >0 | reidFeatureSize: 128 |
| | reidHistorySize | Size of feature gallery, i.e., max number of Re-ID features kept for one tracker | Integer, >0 | reidHistorySize: 100 |
| | inferDims | Re-ID network input dimension CHW or HWC based on inputOrder | Integer, >0 | inferDims: [128, 64, 3] |
| | colorFormat | Re-ID network input color format among { RGB=0, BGR=1 } | Integer, [0, 1] | colorFormat: 0 |
| | networkMode | Re-ID network inference precision mode among { FP32=0, FP16=1, INT8=2 } | Integer, [0, 1, 2] | networkMode: 0 |
| | offsets | Array of values to be subtracted from each input channel, with length equal to number of channels | Comma-delimited float array | offsets: [0.0, 0.0, 0.0] |
| | netScaleFactor | Scaling factor for Re-ID network input after subtracting offsets | Float, >0 | netScaleFactor: 1.0 |
| | inputBlobName | Re-ID network input layer name | String | inputBlobName: "images" |
| | outputBlobName | Re-ID network output layer name | String | outputBlobName: "features" |
| | uffFile | Absolute path to Re-ID network UFF model | String | uffFile: "" |
| | modelEngineFile | Absolute path to Re-ID engine file | String | modelEngineFile: "" |
| | calibrationTableFile | Absolute path to calibration table, required by INT8 mode only | String | calibrationTableFile: "" |
| | keepAspc | Whether to keep aspect ratio when resizing input objects to Re-ID network | Boolean | keepAspc: 1 |
| Data Associator | thresholdMahalanobis | Max Mahalanobis distance based on Chi-square probabilities | Float, >0 considered as set | thresholdMahalanobis: -1.0 |
| | minMatchingScore4Overall | Min total score; in DeepSORT, only the Re-ID similarity score is used as the total score | Float, 0.0 to 1.0 | minMatchingScore4Overall: 0.0 |
Implementation Details and Reference¶
The differences between DeepSORT's implementation in the reference NvMultiObjectTracker library and the official implementation include:
For data association, the official implementation sorts the objects in an ascending order based on the tracking age and runs the matching algorithm once for objects at each age, while the DeepSORT implementation in NvMultiObjectTracker library applies a greedy matching algorithm to all the objects with bounding box size and class checks to achieve a better performance-accuracy tradeoff.
The DeepSORT implementation in the NvMultiObjectTracker library adopts the same target management policy as the NvDCF tracker, which is more advanced than that of the official DeepSORT.
In the official implementation, the cosine distance between two features is \(1-feature\_det_{i}\cdot feature\_track_{jk}\), where smaller values indicate more similarity. By contrast, the NvMultiObjectTracker library uses the dot product directly for computational efficiency, so larger values mean higher similarity.
Reference: Wojke, Nicolai, Alex Bewley, and Dietrich Paulus. “Simple online and realtime tracking with a deep association metric.” 2017 IEEE International Conference on Image Processing (ICIP). IEEE, 2017. Check Paper and Official implementation on GitHub.
Low-Level Tracker Comparisons and Tradeoffs¶
DeepStream SDK provides three reference low-level tracker libraries, which have different resource requirements and performance characteristics in terms of accuracy, robustness, and efficiency, allowing users to choose the best tracker for their use cases and requirements. See the following table for a comparison.
| Tracker Type | GPU Compute | CPU Compute | Pros | Cons | Best Use Cases |
|---|---|---|---|---|---|
| IOU | X | Very Low | | | |
| NvDCF | Medium | Low | | | |
| DeepSORT | High | Low | | | |
How to Implement a Custom Low-Level Tracker Library¶
To write a custom low-level tracker library, users are expected to implement the API defined in sources/includes/nvdstracker.h, which is covered in an earlier section on the NvDsTracker API; parts of the API refer to sources/includes/nvbufsurface.h. Thus, users need to include nvdstracker.h to implement the API:
#include "nvdstracker.h"
Below is a sample implementation of each API. First of all, the low-level tracker library needs to implement the query function called by the plugin, as shown below:
NvMOTStatus NvMOT_Query(uint16_t customConfigFilePathSize, char* pCustomConfigFilePath, NvMOTQuery *pQuery)
{
    /**
     * Users can parse the low-level config file in pCustomConfigFilePath to check
     * the low-level tracker's requirements
     */
    pQuery->computeConfig = NVMOTCOMP_GPU;              // among {NVMOTCOMP_GPU, NVMOTCOMP_CPU}
    pQuery->numTransforms = 1;                          // 0 for IOU tracker, 1 for NvDCF or DeepSORT tracker as they require the video frames
    pQuery->colorFormats[0] = NVBUF_COLOR_FORMAT_NV12;  // among {NVBUF_COLOR_FORMAT_NV12, NVBUF_COLOR_FORMAT_RGBA}

    // among {NVBUF_MEM_DEFAULT, NVBUF_MEM_CUDA_DEVICE, NVBUF_MEM_CUDA_UNIFIED, NVBUF_MEM_CUDA_PINNED, ... }
#ifdef __aarch64__
    pQuery->memType = NVBUF_MEM_DEFAULT;
#else
    pQuery->memType = NVBUF_MEM_CUDA_DEVICE;
#endif

    pQuery->batchMode = NvMOTBatchMode_Batch;  // set NvMOTBatchMode_Batch if the low-level tracker supports batch processing mode. Otherwise, NvMOTBatchMode_NonBatch
    pQuery->supportPastFrame = true;           // set true if the low-level tracker supports the past-frame data

    /**
     * return NvMOTStatus_Error if something is wrong
     * return NvMOTStatus_OK if everything went well
     */
}
Assume that the low-level tracker library defines and implements a custom class (e.g., the NvMOTContext class in the sample code below) to perform the actual operations corresponding to each API call. Below is sample code for the initialization and de-initialization APIs:
Note
The sample code below is a skeleton only. Users are expected to add proper error handling and additional code as needed.
NvMOTStatus NvMOT_Init(NvMOTConfig *pConfigIn, NvMOTContextHandle *pContextHandle, NvMOTConfigResponse *pConfigResponse)
{
    if(pContextHandle != nullptr)
    {
        NvMOT_DeInit(*pContextHandle);
    }

    /// User-defined class for the context
    NvMOTContext *pContext = nullptr;

    /// Instantiate the user-defined context
    pContext = new NvMOTContext(*pConfigIn, *pConfigResponse);

    /// Pass the pointer as the context handle
    *pContextHandle = pContext;

    /**
     * return NvMOTStatus_Error if something is wrong
     * return NvMOTStatus_OK if everything went well
     */
}

/**
 * This is a sample code for the constructor of `NvMOTContext`
 * to show what may need to happen when NvMOTContext is instantiated in the above code for `NvMOT_Init` API
 */
NvMOTContext::NvMOTContext(const NvMOTConfig &config, NvMOTConfigResponse& configResponse)
{
    // Set CUDA device as needed
    cudaSetDevice(config.miscConfig.gpuId);

    // Instantiate an appropriate localizer/tracker implementation
    // Load and parse the config file for the low-level tracker using the path to a config file
    m_pLocalizer = LocalizerFactory::getInstance().makeLocalizer(config.customConfigFilePath);

    // Set max # of streams to be supported
    // ex) uint32_t maxStreams = config.maxStreams;

    // Use the video frame info
    for(uint i = 0; i < config.numTransforms; i++)
    {
        // Use the expected color format from the input source images
        NvBufSurfaceColorFormat configColorFormat =
            (NvBufSurfaceColorFormat) config.perTransformBatchConfig[i].colorFormat;

        // Use the frame width, height, and pitch as needed
        uint32_t frameHeight = config.perTransformBatchConfig[i].maxHeight;
        uint32_t frameWidth  = config.perTransformBatchConfig[i].maxWidth;
        uint32_t framePitch  = config.perTransformBatchConfig[i].maxPitch;

        /* Add here to pass the frame info to the low-level tracker */
    }

    // Set if everything goes well
    configResponse.summaryStatus = NvMOTConfigStatus_OK;
}

void NvMOT_DeInit(NvMOTContextHandle contextHandle)
{
    /// Destroy the context handle
    delete contextHandle;
}
During the initialization stage (when NvMOT_Init() is called), the context for the low-level tracker is expected to be instantiated, and its pointer is returned as the context handle (i.e., pContextHandle) along with the configuration status in pConfigResponse. Users may allocate memories based on the information about the video frames (e.g., width, height, pitch, and colorFormat) and streams (e.g., max # of streams) from the input NvMOTConfig *pConfigIn, where the definition of the struct NvMOTConfig can be found in nvdstracker.h. The path to the config file for the low-level tracker library in pConfigIn->customConfigFilePath can also be used to parse the config file and initialize the low-level tracker library.
Once the low-level tracker library creates the tracker context during the initialization stage, it needs to implement a function to process each frame batch, which is NvMOT_Process()
. Make sure to set the stream ID properly in the output so that pParams->frameList[i].streamID
matches with pTrackedObjectsBatch->list[j].streamID
if they are for the same stream, regardless of i
and j
. The method NvMOTContext::processFrame()
in the sample code below is expected to perform the required multi-object tracking operations with the input data of the video frames and the detector object information, while reporting the tracking outputs in NvMOTTrackedObjBatch *pTrackedObjectsBatch
.
Users can refer to Accessing NvBufSurface memory in OpenCV to know more about how to access the pixel data in the video frames.
NvMOTStatus NvMOT_Process(NvMOTContextHandle contextHandle, NvMOTProcessParams *pParams, NvMOTTrackedObjBatch *pTrackedObjectsBatch)
{
    /// Process the given video frame using the user-defined method in the context, and generate outputs
    contextHandle->processFrame(pParams, pTrackedObjectsBatch);

    /**
     * return NvMOTStatus_Error if something is wrong
     * return NvMOTStatus_OK if everything went well
     */
}

/**
 * This is a sample code for the method of `NvMOTContext::processFrame()`
 * to show what may need to happen when it is called in the above code for `NvMOT_Process` API
 */
NvMOTStatus NvMOTContext::processFrame(const NvMOTProcessParams *params, NvMOTTrackedObjBatch *pTrackedObjectsBatch)
{
    // Make sure the input frame is valid according to the MOT Config used to create this context
    for(uint streamInd = 0; streamInd < params->numFrames; streamInd++)
    {
        NvMOTFrame *motFrame = &params->frameList[streamInd];
        for(uint i = 0; i < motFrame->numBuffers; i++)
        {
            /* Add something here to check the validity of the input using the following info:
             *   motFrame->bufferList[i]->width
             *   motFrame->bufferList[i]->height
             *   motFrame->bufferList[i]->pitch
             *   motFrame->bufferList[i]->colorFormat
             */
        }
    }

    // Construct the MOT input frames
    std::map<NvMOTStreamId, NvMOTFrame*> nvFramesInBatch;
    for(NvMOTStreamId streamInd = 0; streamInd < params->numFrames; streamInd++)
    {
        NvMOTFrame *motFrame = &params->frameList[streamInd];
        nvFramesInBatch[motFrame->streamID] = motFrame;
    }

    if(nvFramesInBatch.size() > 0)
    {
        // Perform update and construct the output data inside
        m_pLocalizer->update(nvFramesInBatch, pTrackedObjectsBatch);

        /**
         * The call m_pLocalizer->update() is expected to properly populate the output (i.e., `pTrackedObjectsBatch`).
         *
         * One thing not to forget is to fill `pTrackedObjectsBatch->list[i].list[j].associatedObjectIn`, where
         * `i` and `j` are indices for stream and targets in the list, respectively.
         * If the `j`-th target was associated/matched with a detector object,
         * then `associatedObjectIn` is supposed to have the pointer to the associated detector object.
         * Otherwise, `associatedObjectIn` shall be set to NULL.
         */
    }
}
In case the low-level tracker has the capability of storing the past-frame data, it can be retrieved by the tracker plugin using the NvMOT_ProcessPast() API call.
NvMOTStatus NvMOT_ProcessPast(NvMOTContextHandle contextHandle, NvMOTProcessParams *pParams, NvDsPastFrameObjBatch *pPastFrameObjBatch)
{
    /// Retrieve the past-frame data if there is any
    contextHandle->processFramePast(pParams, pPastFrameObjBatch);

    /**
     * return NvMOTStatus_Error if something is wrong
     * return NvMOTStatus_OK if everything went well
     */
}

/**
 * This is a sample code for the method of `NvMOTContext::processFramePast()`
 * to show what may need to happen when it is called in the above code for `NvMOT_ProcessPast` API
 */
NvMOTStatus NvMOTContext::processFramePast(const NvMOTProcessParams *params, NvDsPastFrameObjBatch *pPastFrameObjBatch)
{
    /// Indicate which streams to fetch the past-frame data for
    std::set<NvMOTStreamId> videoStreamIdList;
    for(NvMOTStreamId streamInd = 0; streamInd < params->numFrames; streamInd++)
    {
        videoStreamIdList.insert(params->frameList[streamInd].streamID);
    }

    m_pLocalizer->outputPastFrameObjs(videoStreamIdList, pPastFrameObjBatch);
}
For the cases where video stream sources are dynamically removed and added, the API call NvMOT_RemoveStreams() can be implemented to clean up the resources that are no longer needed.
NvMOTStatus NvMOT_RemoveStreams(NvMOTContextHandle contextHandle, NvMOTStreamId streamIdMask)
{
    /// Remove the specified video stream from the low-level tracker context
    contextHandle->removeStream(streamIdMask);

    /**
     * return NvMOTStatus_Error if something is wrong
     * return NvMOTStatus_OK if everything went well
     */
}

/**
 * This is a sample code for the method of `NvMOTContext::removeStream()`
 * to show what may need to happen when it is called in the above code for `NvMOT_RemoveStreams` API
 */
NvMOTStatus NvMOTContext::removeStream(const NvMOTStreamId streamIdMask)
{
    m_pLocalizer->deleteRemovedStreamTrackers(streamIdMask);
}
In sum, to work with the NvDsTracker APIs, users may want to define a class NvMOTContext like below to implement the methods in the code above. The actual implementation of each method may differ depending on the tracking algorithm the user chooses to implement.
/**
 * @brief Context for input video streams
 *
 * The stream context holds all necessary state to perform multi-object tracking
 * within the stream.
 *
 */
class NvMOTContext
{
public:
    NvMOTContext(const NvMOTConfig &configIn, NvMOTConfigResponse& configResponse);
    ~NvMOTContext();

    /**
     * @brief Process a batch of frames
     *
     * Internal implementation of NvMOT_Process()
     *
     * @param [in] pParam Pointer to parameters for the frame to be processed
     * @param [out] pTrackedObjectsBatch Pointer to object tracks output
     */
    NvMOTStatus processFrame(const NvMOTProcessParams *params,
                             NvMOTTrackedObjBatch *pTrackedObjectsBatch);

    /**
     * @brief Output the past-frame data if there is any
     *
     * Internal implementation of NvMOT_ProcessPast()
     *
     * @param [in] pParam Pointer to parameters for the frame to be processed
     * @param [out] pPastFrameObjectsBatch Pointer to past frame object tracks output
     */
    NvMOTStatus processFramePast(const NvMOTProcessParams *params,
                                 NvDsPastFrameObjBatch *pPastFrameObjectsBatch);

    /**
     * @brief Terminate trackers and release resources for a stream when the stream is removed
     *
     * Internal implementation of NvMOT_RemoveStreams()
     *
     * @param [in] streamIdMask removed stream ID
     */
    NvMOTStatus removeStream(const NvMOTStreamId streamIdMask);

protected:
    /**
     * Users can include an actual tracker implementation here as a member.
     * `IMultiObjectTracker` can be assumed to be a user-defined interface class.
     */
    std::shared_ptr<IMultiObjectTracker> m_pLocalizer;
};