Multi-Camera Tracking UI

Note

The web UI is provided only as a reference interface that demonstrates how to use the Analytics & Tracking API to query and present various data metrics and KPIs; it is not meant for production. Hardening for production, including improvements to use-case specificity, usability, robustness, scalability, and security, remains to be done by users.

This web UI, developed with the ReactJS framework and the Networking API, provides a reference UI for the Multi-Camera Tracking application.

The following sections describe the architecture of this web UI and the user workflow.

Overview

Architecture

This visualization application presents locations/markers over a floor map, relevant streamed videos, and numerical/textual analytics data together, giving the user a contextual understanding of the scenario being shown. For video streaming, the UI application communicates with the Media Streaming module using HTTP calls and streams video content over a WebRTC channel. For all analytics data and application metadata, the UI application queries the Web APIs server over HTTP.
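As a minimal sketch of this query pattern (the endpoint path and response shape below are illustrative assumptions, not the actual Web APIs contract):

// Hypothetical event query against the Web APIs server.
interface MtmcEvent {
  globalId: string;
  timestamp: string;
  place: string;
}

async function fetchRecentEvents(apiBase: string, hours: number): Promise<MtmcEvent[]> {
  // Endpoint path and query parameter are placeholder assumptions.
  const response = await fetch(`${apiBase}/events?fromHoursAgo=${hours}`);
  if (!response.ok) {
    throw new Error(`Event query failed: ${response.status}`);
  }
  return response.json() as Promise<MtmcEvent[]>;
}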

Configurations

For more context on configuration:

  • As a component of the Multi-Camera Tracking app, please refer to its Operation Parameters section.

  • As a standalone microservice, refer to the README.md in its respective directory within metropolis-apps-standalone-deployment/modules/.

If any parameter is changed, the UI needs to be restarted for the change to take effect.

The parameters in the Multi-Camera Tracking UI configuration are described as follows:

{
        "uiDelaySeconds": 20, -- Signifies how far behind the UI app is compared to the real current video time. Accounts for the duration that the backend analytics takes to process the current video input and generate useful analytics data.
        "docType": "uiConfig", -- Used by the UI app.
        "alertQueryDurationInHours": 2, -- Default range of events to query for. Example: at any point, all the live events in the events/alerts panel are from the past 2 hours.
        "alertListLength": 5, -- Count of top events/alerts to be shown on the UI.
        "apiRefreshIntervalSeconds": 30, -- Interval at which the query to get events/alerts is made in a looping fashion.
        "defaultLiveTimeUnit": "Hours", -- Default time unit for the Events Panel time settings.
        "defaultLiveTimeValue": 2, -- Default integer value for the Events Panel time settings.
        "sensorEventsListLengthToShow": 5 -- (Optional) Integer value for how many sensor-based event cards to show for a global MTMC event. Default is 5.
}
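To illustrate how a client might apply these parameters, the sketch below derives the delayed display time from uiDelaySeconds and polls for events every apiRefreshIntervalSeconds. The helper names are hypothetical; only the parameter semantics come from the config above.

// Sketch: applying uiDelaySeconds and apiRefreshIntervalSeconds on the client.
interface UiConfig {
  uiDelaySeconds: number;
  apiRefreshIntervalSeconds: number;
  alertQueryDurationInHours: number;
}

function displayTime(config: UiConfig): Date {
  // The UI lags real time so the backend analytics can catch up.
  return new Date(Date.now() - config.uiDelaySeconds * 1000);
}

function startEventPolling(config: UiConfig, refresh: () => void): number {
  refresh(); // initial query covering the past alertQueryDurationInHours
  return window.setInterval(refresh, config.apiRefreshIntervalSeconds * 1000);
}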

User Interactions

Application Menu

../_images/mtmc-indoor-application-menu.svg

Config Actions

Allows adding/updating and viewing the Calibration JSON config file, which contains metadata for entities such as buildings, rooms, and cameras, along with other application parameters. This works the same as in the other People Analytics application.

Upload Config
../_images/mtmc-indoor-config-actions-upload-panel.svg

Used to add the following config file via a POST request: calibration.json. The file is generated using the Calibration Toolkit. Upon the first run after a new deployment, this file must be inserted into the backend database using the UI before the UI app can show data.
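A minimal sketch of this upload call (the endpoint path is a placeholder assumption):

// Hypothetical POST of calibration.json to the backend.
async function uploadCalibration(apiBase: string, file: File): Promise<string> {
  const body = await file.text(); // calibration.json contents
  const response = await fetch(`${apiBase}/config/calibration`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body,
  });
  return response.text(); // shown in the panel's output area
}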

To close the panel, use the close button at the top of the panel.

Get Config
../_images/mtmc-indoor-config-actions-get-panel.svg

Uses a GET request to view (read-only) the calibration config file currently used by the application. Press the Submit button; if the query succeeds, the JSON config is filled into the read-only output text area.

Images upload

Panel to upload images of camera views or the building floor plan, to be used by several other relevant widgets in the UI app.

../_images/mtmc-indoor-camera-image-upload-panel.svg

Upload images from the local system; they are sent to the backend server using a POST request and queried dynamically by widgets. The output of the POST call is shown in the OUTPUT text box. These images and the corresponding image metadata files are generated by the Calibration Toolkit.
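A sketch of the image upload as a multipart POST (the endpoint and form field names are assumptions):

// Hypothetical multipart upload of camera-view / floor-plan images.
async function uploadImages(apiBase: string, files: FileList): Promise<string> {
  const form = new FormData();
  for (const file of Array.from(files)) {
    form.append("images", file, file.name); // field name is an assumption
  }
  const response = await fetch(`${apiBase}/images`, { method: "POST", body: form });
  return response.text(); // displayed in the OUTPUT text box
}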

Multi-Camera Tracking Events View

A Multi-Camera Tracking event depicts one unique person per event. This view shows a list of such Multi-Camera Tracking events and allows the user to see detailed sub-event information, along with a visualization of each event on an indoor area map in the context of the area where the cameras are located.

This view has two modes: Live and Past. The UI app starts in the Multi-Camera Tracking Live Mode View, where a list of recent Multi-Camera Tracking events is shown and updated continuously at regular intervals. The people trajectories associated with the events are also plotted and updated frequently on the map view.

Each global ID has multiple associated local IDs, and each local ID has its own moving trajectory. The trajectory of the last local ID of each global ID is plotted on the floor plan view, color-coded the same as the corresponding global ID. The start of a trajectory is drawn as a solid filled circle in the corresponding color, and the end as a bigger white-filled circle.
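These start and end markers map directly onto canvas drawing calls. Below is a minimal sketch; the point format, radii, and scaling are assumptions.

// Sketch: plot a trajectory with a solid start circle and a bigger
// white-filled end circle, both in the trajectory's assigned color.
type Point = { x: number; y: number };

function drawTrajectory(ctx: CanvasRenderingContext2D, points: Point[], color: string): void {
  if (points.length < 2) return;
  ctx.strokeStyle = color;
  ctx.beginPath();
  ctx.moveTo(points[0].x, points[0].y);
  for (const p of points.slice(1)) ctx.lineTo(p.x, p.y);
  ctx.stroke();

  // Start of the trajectory: small solid circle in the trajectory color.
  ctx.fillStyle = color;
  ctx.beginPath();
  ctx.arc(points[0].x, points[0].y, 4, 0, 2 * Math.PI);
  ctx.fill();

  // End of the trajectory: bigger white-filled circle, outlined in the color.
  const end = points[points.length - 1];
  ctx.fillStyle = "white";
  ctx.beginPath();
  ctx.arc(end.x, end.y, 7, 0, 2 * Math.PI);
  ctx.fill();
  ctx.stroke();
}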

../_images/mtmc-indoor-live-events.svg

The events in the list can be filtered using different filters in this mode. The filters include options to change the duration over which to check for recent events, and/or the place within the global area that the events should belong to. Note that the time duration filter needs to be larger than the app config parameter for the micro-batch interval, kafkaMicroBatchIntervalMin; otherwise, no event will be displayed.

../_images/mtmc-indoor-live-events-filter.svg

Switch to Past mode by clicking the Live/Not-Live button. Live mode can be restored at any time by clicking the same button.

../_images/mtmc-indoor-not-live-events-filter.svg

In Past mode, the event filters are similar; the difference is that the time is specified via start and end times, showing the top events from the given time range instead of just recent events.

Upon selecting any unique-person global event during Live mode, the view automatically switches to Not-Live mode, since the view is now about the selected event.

The global event cards are shown with a thumbnail and a bounding box around the targeted person, along with a few other details as text. The thumbnail is chosen from the duration of the person's first occurrence at the first sensor that detected them. The thumbnail could be chosen from another occurrence, but the simplest implementation is currently used, since its only purpose is to give visual context of the detected person via an image. Each card is assigned a color from a set of colors (colors may repeat after a certain count of event cards), and the trajectory of the first occurrence at a sensor is plotted on the map in the same color, with a direction arrow.
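The repeating color assignment can be implemented as a simple modulo lookup; the palette values here are assumptions.

// Sketch: assign each event card a color, repeating after the palette is exhausted.
const PALETTE = ["#e6194b", "#3cb44b", "#4363d8", "#f58231", "#911eb4"]; // assumed values

function cardColor(cardIndex: number): string {
  return PALETTE[cardIndex % PALETTE.length];
}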

../_images/mtmc-indoor-events-selected-global-event-1.svg ../_images/mtmc-indoor-events-selected-global-event-2.svg

Upon selecting a global event from the vertical scrollable list, a list of sub-events, each corresponding to a per-sensor detection of the person, is shown as a horizontal scrollable list. The selected global event card is shown in a darker gray color. Hovering over the global event card or any sub-event card plots its pertaining trajectory on the map. Each per-sensor sub-event card also shows information text and a thumbnail with a bounding box around the targeted person, chosen from the duration of that per-sensor occurrence.

To view the video segment pertaining to the per sensor event, the event card can be clicked.

../_images/mtmc-indoor-events-selected-per-sensor-event.svg

After clicking a sub-event card, only the selected sub-event card is shown in the list area and is changed to a gray color. The pertaining video is shown below the selected sub-event card. As the video plays, the corresponding trajectory on the map is animated for better visual context. To replay the event video and animation, click the “Replay” button located above the video.

To return to the previous list from the selected global event card or the per-sensor sub-event card, click the “Click to go back” button located above the respective card view area.

../_images/mtmc-indoor-events-selected-per-sensor-event-perform-qbe.svg

To perform QBE (Query-by-Example), pause the playing video and click the newly appeared button below it. This opens the QBE widget for the last video frame at which the video was paused.
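One plausible way to capture the paused frame for the QBE widget is to draw the video element onto a canvas; a minimal sketch:

// Sketch: capture the last paused video frame as an image for QBE.
function capturePausedFrame(video: HTMLVideoElement): string {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D canvas context unavailable");
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  return canvas.toDataURL("image/jpeg"); // frame image handed to the QBE widget
}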

QBE (Query-by-Example) View

Note

A comparison of various Multi-Camera Tracking approaches can be found here

../_images/mtmc-indoor-qbe-panel.svg

This view is essentially a QBE Widget window overlaid on top of the Multi-Camera Tracking view. The widget is launched with the last paused frame of the last paused video in the Multi-Camera Tracking view. The frame image is overlaid with bounding boxes around all detected people in the frame. A bounding box can be clicked to perform QBE, and the related events across different times and sensors are shown in a newly populated list below the image. The last selected person’s bounding box border gets thicker and glows with a green shadow.

../_images/mtmc-indoor-qbe-panel-filters.svg

Specific filters can be selected before making a QBE query. The available filters are the target sensor and/or an ‘Hours Ago’ duration. Selecting a target sensor fetches only results pertaining to that sensor. Selecting ‘Hours Ago’ restricts the results to the specified time range in the recent past. The widget launches with the default ‘Hours Ago’ value and no target sensor (i.e., all sensors are considered for the query). Changing a filter after a previous query was already made re-runs the query with the newly selected filter options. Once the QBE widget is launched again for a new video frame, the filters reset to their defaults. The currently selected filters are printed in the row below the filters icon.
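A hedged sketch of how the filter state could translate into a query (the parameter names are assumptions; the 72-hour cap comes from the note at the end of this page):

// Sketch: build a QBE query URL from the selected filters.
const MAX_HOURS_AGO = 72; // upper bound noted at the end of this page

interface QbeFilters {
  hoursAgo: number;
  sensorId?: string; // undefined means all sensors are considered
}

function buildQbeQuery(apiBase: string, filters: QbeFilters): string {
  const params = new URLSearchParams();
  params.set("hoursAgo", String(Math.min(filters.hoursAgo, MAX_HOURS_AGO)));
  if (filters.sensorId) params.set("sensorId", filters.sensorId);
  return `${apiBase}/qbe?${params.toString()}`; // endpoint path is an assumption
}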

../_images/mtmc-indoor-qbe-panel-selected-person.svg

Each event is shown as a card with an assigned color, similar to the global event cards in the Multi-Camera Tracking view. The relevant trajectory for each card is shown in the adjacent map view. The start of a trajectory is drawn as a solid filled circle in the corresponding color, and the end as a bigger white-filled circle.

The “Score” information on the event card depicts how close the match is to the original selected behavior.

../_images/mtmc-indoor-qbe-panel-selected-person-selected-behavior.svg ../_images/mtmc-indoor-qbe-panel-selected-person-selected-behavior-with-video.svg

An event’s pertaining video and the trajectory animation can be shown once the event card is clicked. To replay the event video and animation, click the “Replay” button located above the video.

To go back to the previous list, click the “Click to go back” button above the selected event card.

../_images/mtmc-indoor-qbe-panel-selected-person-selected-behavior-perform-qbe.svg

A new QBE query can be performed by pausing the video seen in this widget. The widget would then refresh for the new query.

To return to viewing the Multi-Camera Tracking Live/Past View, close the QBE widget by clicking the close button in the top right corner of the widget window.

QBE (Query-by-Example) on Video Recordings

To perform QBE directly on a video recording, instead of navigating to a Multi-Camera Tracking event video, this workflow can be used. A recorded video segment from the desired camera is loaded, the desired time/frame is reached via normal playback or a seek operation, and then QBE is performed on it.

../_images/mtmc-indoor-app-content-menu-position.svg

../_images/mtmc-indoor-app-content-menu-zoomed-in.svg

../_images/mtmc-indoor-app-content-menu-options.svg

To access this workflow, start with the “QBE on Recordings” menu option on the app’s content menu at the bottom right of the UI.

../_images/mtmc-indoor-qbe-on-recordings-filters.svg

The widget/tool window opens as the topmost overlay. First, choose the desired camera. Available recorded timelines for the camera are shown as a visual slider timeline. The timeline slider loads with the most recent duration selected by default (1 hour maximum). The selected duration is presented in the text below the timeline. Move the start and end thumbs on the recordings’ timeline to set the intended duration, and click “Apply” to load the video for that duration.

Note

  • Sliding the start and end thumbs does not immediately load the video for that duration. The video is loaded only after the “Apply” button is clicked once the desired duration is set on the timeline.

The durations for which no recordings were made (due to an error, or a manual/schedule setting) are shown as red gaps on the timeline. A video cannot be loaded if the start and end thumbs mark a duration that contains a no-recording/disabled span.
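This restriction reduces to a simple interval-overlap test; a sketch under assumed data shapes:

// Sketch: reject a selected duration that overlaps any no-recording gap.
interface TimeRange {
  start: number; // epoch milliseconds
  end: number;
}

function selectionIsPlayable(selection: TimeRange, gaps: TimeRange[]): boolean {
  // Two intervals overlap iff each starts before the other ends.
  return !gaps.some((gap) => selection.start < gap.end && gap.start < selection.end);
}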

Every time a new camera is selected, the timeline information is requested afresh from the media server.

../_images/mtmc-indoor-qbe-on-recordings-filters-with-video.svg

Once the video is loaded, a seek operation can be performed by sliding the video playback timeline thumb, or the video can simply be left to play, to reach the desired frame on which to perform QBE. To perform QBE, pause the video via its control buttons; once the “Perform QBE” button appears, click it (the same way QBE is performed on an event video).

../_images/mtmc-indoor-qbe-on-recordings-minimized.svg

Clicking the button opens the QBE Tool (presented in the previous section) and auto-minimizes the current window. Refer to the QBE workflow (QBE (Query-by-Example) View) presented in the previous section. The minimized window maintains its last state before being minimized, so a user can maximize it again and resume from where they left off. The window can also be manually minimized by clicking the minimize button on the widget’s title bar.

Map View Widget Settings

../_images/rtls-map-view-settings-cameras.svg

The widget includes settings where users can choose whether to display camera icons or not. Icons are only displayed if the necessary information is available in the system’s calibration file.

Note

  • Sometimes, due to network issues, the video may fail to play. Trying to replay the video a few times may resolve this.

  • Due to practical reasons of storage/computation, the ‘Hours Ago’ QBE filter allows up to 72 hours and won’t allow selecting a value beyond this range.