mdx.mtmc.utils.viz_rtls_utils module

class GlobalObject(config: VizConfig, global_id: int, color: Tuple[int, int, int] | None = None)

Bases: object

Global object with locations and corresponding timestamps

Parameters:
  • config (VizConfig) – visualization config

  • global_id (int) – global ID

  • color (Optional[Tuple[int,int,int]]) – color

global_object = GlobalObject(config, global_id, color)
activate() None

Activates the global object

Returns:

None

global_object.activate()
deactivate() None

Deactivates the global object

Returns:

None

global_object.deactivate()
update(location: List[float], timestamp: str, enable_smoothing: bool = False) None

Updates locations and timestamps of the object

Parameters:
  • location (List[float]) – location

  • timestamp (str) – timestamp

  • enable_smoothing (bool) – flag indicating whether to apply smoothing

Returns:

None

global_object.update(location, timestamp, enable_smoothing)
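The `enable_smoothing` flag suggests locations may be filtered over time before being stored. A minimal sketch of one common approach (exponential smoothing); the class name, the smoothing factor `alpha`, and its default are illustrative assumptions, not part of this API:

```python
from typing import List


class SmoothedTrack:
    """Hypothetical stand-in for GlobalObject's location history,
    illustrating exponential smoothing of 2D locations."""

    def __init__(self, alpha: float = 0.5):
        self.alpha = alpha  # weight of the newest observation (assumed value)
        self.locations: List[List[float]] = []
        self.timestamps: List[str] = []

    def update(self, location: List[float], timestamp: str,
               enable_smoothing: bool = False) -> None:
        # Blend the new location with the previous one when smoothing is on
        if enable_smoothing and self.locations:
            prev = self.locations[-1]
            location = [self.alpha * new + (1.0 - self.alpha) * old
                        for new, old in zip(location, prev)]
        self.locations.append(location)
        self.timestamps.append(timestamp)


track = SmoothedTrack(alpha=0.5)
track.update([0.0, 0.0], "2024-01-01T00:00:00Z")
track.update([2.0, 2.0], "2024-01-01T00:00:01Z", enable_smoothing=True)
print(track.locations[-1])  # [1.0, 1.0]
```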
class GlobalObjects(config: VizConfig)

Bases: object

Global objects

Parameters:

config (VizConfig) – visualization config

global_objects = GlobalObjects(config)
get_trajectory_info() Dict[int, Dict[str, Any]]

Gets trajectory information

Returns:

trajectory information

Return type:

Dict[int, Dict[str, Any]]

map_global_id_to_trajectory_info = global_objects.get_trajectory_info()
update(locations_of_objects: Dict[str, Any], frame_id: int, timestamp: str) None

Updates global objects

Parameters:
  • locations_of_objects (Dict[str, Any]) – locations of objects

  • frame_id (int) – frame ID

  • timestamp (str) – timestamp

Returns:

None

global_objects.update(locations_of_objects, frame_id, timestamp)
class VizConfig(config: VizRtlsConfig)

Bases: object

Visualization config

Parameters:

config (VizRtlsConfig) – configuration for RTLS visualization

viz_config = VizConfig(config)
blend_overlaid_images(image_map: numpy.array, alpha: float, overlaid_images: List[numpy.array]) numpy.array

Blends overlaid images

Parameters:
  • image_map (np.array) – image of floor plan

  • alpha (float) – weight of the input image in the blend

  • overlaid_images (List[np.array]) – overlaid images

Returns:

blended image

Return type:

np.array

blended_image = blend_overlaid_images(image_map, alpha, overlaid_images)
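A self-contained sketch of one plausible blending scheme, assuming the map keeps weight `alpha` and the remaining weight is split evenly across the overlays (the real weighting is not specified here):

```python
from typing import List

import numpy as np


def blend_with_overlays(image_map: np.ndarray, alpha: float,
                        overlaid_images: List[np.ndarray]) -> np.ndarray:
    """Hypothetical blend: keep `alpha` of the floor-plan map and
    distribute the remaining weight evenly across the overlays."""
    blended = alpha * image_map.astype(np.float32)
    if overlaid_images:
        weight = (1.0 - alpha) / len(overlaid_images)
        for overlay in overlaid_images:
            blended += weight * overlay.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)


base = np.full((2, 2, 3), 200, dtype=np.uint8)
overlay = np.full((2, 2, 3), 100, dtype=np.uint8)
out = blend_with_overlays(base, 0.5, [overlay])
print(out[0, 0])  # [150 150 150]
```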
convert_to_map_pixel(location: List[float], translation_to_global_coordinates: Dict[str, float], scale_factor: float, map_height: int) Tuple[int, int]

Converts global location to pixel location on the map

Parameters:
  • location (List[float]) – location

  • translation_to_global_coordinates (Dict[str, float]) – translation to global coordinates

  • scale_factor (float) – scale factor

  • map_height (int) – map height

Returns:

map pixel

Return type:

Tuple[int, int]

map_pixel = convert_to_map_pixel(location, translation_to_global_coordinates, scale_factor, map_height)
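A sketch of the typical conversion: translate into map coordinates, scale to pixels, and flip the y-axis because the image origin is the top-left corner. The translation key names `"x"`/`"y"` and the rounding behavior are assumptions:

```python
from typing import Dict, List, Tuple


def to_map_pixel(location: List[float],
                 translation: Dict[str, float],
                 scale_factor: float,
                 map_height: int) -> Tuple[int, int]:
    """Hypothetical global-to-pixel conversion: translate, scale,
    then flip y so that larger world y maps to a smaller pixel row."""
    x = (location[0] + translation["x"]) * scale_factor
    y = (location[1] + translation["y"]) * scale_factor
    return int(round(x)), int(round(map_height - y))


pixel = to_map_pixel([2.0, 3.0], {"x": 1.0, "y": 0.0}, 10.0, 100)
print(pixel)  # (30, 70)
```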
correct_fov_polygon(fov_polygon: str) str

Removes empty polygons from the string

Parameters:

fov_polygon (str) – field-of-view polygon based on WKT format

Returns:

corrected field-of-view polygon string

Return type:

str

corrected_fov_polygon = correct_fov_polygon(fov_polygon)
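A sketch of removing empty members from a WKT string; the exact encoding of empty polygons in the real data (here assumed to be `POLYGON EMPTY` inside a `GEOMETRYCOLLECTION`) is an assumption:

```python
import re


def drop_empty_polygons(fov_polygon: str) -> str:
    """Hypothetical cleanup: strip 'POLYGON EMPTY' members from a
    WKT string, then remove any dangling comma left behind."""
    cleaned = re.sub(r"POLYGON\s+EMPTY\s*,?\s*", "", fov_polygon)
    # Remove a trailing comma left before a closing parenthesis
    return re.sub(r",\s*\)", ")", cleaned)


wkt = "GEOMETRYCOLLECTION (POLYGON ((0 0, 1 0, 1 1, 0 0)), POLYGON EMPTY)"
print(drop_empty_polygons(wkt))
# GEOMETRYCOLLECTION (POLYGON ((0 0, 1 0, 1 1, 0 0)))
```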
darken_image(image: numpy.array, alpha: float = 0.2) numpy.array

Darkens an image

Parameters:
  • image (np.array) – image

  • alpha (float) – ratio of the original image retained

Returns:

darkened image

Return type:

np.array

darkened_image = darken_image(image, alpha)
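Darkening by a ratio amounts to scaling intensities toward black; a minimal sketch of that interpretation (equivalent to blending with a black background):

```python
import numpy as np


def darken(image: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    """Sketch: scale pixel intensities by `alpha`, pulling the
    image toward black; alpha=1.0 leaves it unchanged."""
    return (image.astype(np.float32) * alpha).astype(np.uint8)


image = np.full((2, 2, 3), 100, dtype=np.uint8)
print(darken(image, 0.2)[0, 0])  # [20 20 20]
```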
pad_image(image: numpy.array, boundary_width: int = 3, color: Tuple[int, int, int] = (255, 255, 255)) numpy.array

Pads an image

Parameters:
  • image (np.array) – image

  • boundary_width (int) – boundary width

  • color (Tuple[int,int,int]) – color

Returns:

padded image

Return type:

np.array

padded_image = pad_image(image, boundary_width, color)
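A sketch of constant-color padding, the same effect as OpenCV's `cv2.copyMakeBorder` with `BORDER_CONSTANT`; whether the real function pads all four sides equally is an assumption:

```python
from typing import Tuple

import numpy as np


def pad_with_border(image: np.ndarray, boundary_width: int = 3,
                    color: Tuple[int, int, int] = (255, 255, 255)) -> np.ndarray:
    """Sketch: surround the image with a constant-color border of
    `boundary_width` pixels on every side."""
    h, w, c = image.shape
    padded = np.empty((h + 2 * boundary_width, w + 2 * boundary_width, c),
                      dtype=image.dtype)
    padded[:] = color  # fill everything with the border color
    padded[boundary_width:boundary_width + h,
           boundary_width:boundary_width + w] = image
    return padded


image = np.zeros((4, 4, 3), dtype=np.uint8)
padded = pad_with_border(image, boundary_width=3)
print(padded.shape)  # (10, 10, 3)
```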
plot_amr_icon(image_map: numpy.array, center: Tuple[int, int]) numpy.array

Plots a "+"-shaped icon representing an AMR on the image

Parameters:
  • image_map (np.array) – image of floor plan

  • center (Tuple[int,int]) – center location of AMR icon

Returns:

plotted image

Return type:

np.array

plotted_image = plot_amr_icon(image_map, center)
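A self-contained sketch of drawing a "+" icon by painting a horizontal and a vertical bar around the center; the arm length, thickness, and color here are illustrative, not the real defaults:

```python
from typing import Tuple

import numpy as np


def draw_plus_icon(image_map: np.ndarray, center: Tuple[int, int],
                   arm: int = 3, thickness: int = 1,
                   color: Tuple[int, int, int] = (0, 0, 255)) -> np.ndarray:
    """Sketch: paint a '+' as two crossing bars centered on `center`
    (arm/thickness/color are assumed parameters for illustration)."""
    cx, cy = center
    plotted = image_map.copy()
    # Horizontal bar
    plotted[cy - thickness:cy + thickness + 1, cx - arm:cx + arm + 1] = color
    # Vertical bar
    plotted[cy - arm:cy + arm + 1, cx - thickness:cx + thickness + 1] = color
    return plotted


canvas = np.zeros((20, 20, 3), dtype=np.uint8)
plotted = draw_plus_icon(canvas, (10, 10))
```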
plot_combined_image(config: VizConfig, image_map: numpy.array, map_video_name_to_capture: Dict[str, cv2.VideoCapture], global_people: GlobalObjects, global_amrs: GlobalObjects, data_dict: Dict[int, Dict[str, Dict[str, Any]]], rtls_log: List[str], frame_ids: List[int], amr_log: List[str], amr_frame_ids: List[int], frame_id: int, read_frame_only: bool = False) numpy.array

Plots combined image

Parameters:
  • config (VizConfig) – visualization config

  • image_map (np.array) – image of floor plan

  • map_video_name_to_capture (Dict[str,cv2.VideoCapture]) – map from video names to video captures

  • global_people (GlobalObjects) – global person objects

  • global_amrs (GlobalObjects) – global AMR objects

  • data_dict (Dict[int,Dict[str,Dict[str,Any]]]) – dictionary of protobuf/JSON data

  • rtls_log (List[str]) – RTLS log

  • frame_ids (List[int]) – frame IDs

  • amr_log (List[str]) – AMR log

  • amr_frame_ids (List[int]) – AMR frame IDs

  • frame_id (int) – frame ID

  • read_frame_only (bool) – flag indicating whether to read the frame only

Returns:

plotted image

Return type:

np.array

plotted_image = plot_combined_image(config, image_map, map_video_name_to_capture, global_people, global_amrs,
                                    data_dict, rtls_log, frame_ids, amr_log, amr_frame_ids, frame_id, read_frame_only)
plot_fan_shape(image_map: numpy.array, location: List[float], start_angle: float, end_angle: float, radius: float, color: Tuple[int, int, int] = (242, 227, 227)) numpy.array

Plots fan shape

Parameters:
  • image_map (np.array) – image of floor plan

  • location (List[float]) – location

  • start_angle (float) – starting angle of the fan in degrees

  • end_angle (float) – ending angle of the fan in degrees

  • radius (float) – radius of the fan

  • color (Tuple[int,int,int]) – color

Returns:

plotted image

Return type:

np.array

plotted = plot_fan_shape(image_map, location, start_angle, end_angle, radius, color)
plot_overlaid_frame(image_frame: numpy.array, data_dict: Dict[int, Dict[str, Dict[str, Any]]], frame_id: int, sensor_id: str, padding_width: int = 3, padding_color: Tuple[int, int, int] = (255, 255, 255), enable_darkening_image: bool = False) numpy.array

Plots overlaid information on a frame image

Parameters:
  • image_frame (np.array) – image of frame

  • data_dict (Dict[int,Dict[str,Dict[str,Any]]]) – dictionary of protobuf/JSON data

  • frame_id (int) – frame ID

  • sensor_id (str) – sensor ID

  • padding_width (int) – padding width

  • padding_color (Tuple[int,int,int]) – padding color

  • enable_darkening_image (bool) – flag indicating whether to darken image

Returns:

plotted image

Return type:

np.array

plotted_image = plot_overlaid_frame(image_frame, data_dict, frame_id, sensor_id, padding_width, padding_color, enable_darkening_image)
plot_overlaid_map(image_map: numpy.array, sensor_state_objects: List[SensorStateObject | None], global_people: GlobalObjects, global_amrs: GlobalObjects, rtls_log: List[str], frame_ids: List[int], amr_log: List[str], amr_frame_ids: List[int], frame_id: int, frame_id_offset: int = 0) numpy.array

Plots overlaid information on a map image

Parameters:
  • image_map (np.array) – image of floor plan

  • sensor_state_objects (List[Optional[SensorStateObject]]) – list of sensor state objects or None

  • global_people (GlobalObjects) – global person objects

  • global_amrs (GlobalObjects) – global AMR objects

  • rtls_log (List[str]) – RTLS log

  • frame_ids (List[int]) – frame IDs

  • amr_log (List[str]) – AMR log

  • amr_frame_ids (List[int]) – AMR frame IDs

  • frame_id (int) – frame ID

  • frame_id_offset (int) – frame ID offset

Returns:

plotted image

Return type:

np.array

plotted_image = plot_overlaid_map(image_map, sensor_state_objects, global_people, global_amrs, rtls_log, frame_ids, amr_log, amr_frame_ids, frame_id, frame_id_offset)
plot_ploygons(image_map: numpy.array, fov_polygon: str, translation_to_global_coordinates: Dict[str, float], scale_factor: float, map_height: int) numpy.array

Plots polygons

Parameters:
  • image_map (np.array) – image of floor plan

  • fov_polygon (str) – field-of-view polygon based on WKT format

  • translation_to_global_coordinates (Dict[str, float]) – translation to global coordinates

  • scale_factor (float) – scale factor

  • map_height (int) – map height

Returns:

plotted image

Return type:

np.array

plotted_image = plot_ploygons(image_map, fov_polygon, translation_to_global_coordinates, scale_factor, map_height)
plot_sensor_fov(image_map: numpy.array, sensor_state_object: SensorStateObject, radius: float = 200, half_span: float = 40) numpy.array

Plots sensor FOV

Parameters:
  • image_map (np.array) – image of floor plan

  • sensor_state_object (SensorStateObject) – sensor state object

  • radius (float) – radius of the fan

  • half_span (float) – half span of the fan in degrees

Returns:

plotted image

Return type:

np.array

plotted_image = plot_sensor_fov(image_map, sensor_state_object, radius, half_span)
plot_sensor_icon(image_map: numpy.array, sensor_state_object: SensorStateObject, radius: float = 50, half_span: float = 40) numpy.array

Plots sensor icon

Parameters:
  • image_map (np.array) – image of floor plan

  • sensor_state_object (SensorStateObject) – sensor state object

  • radius (float) – radius of the fan

  • half_span (float) – half span of the fan in degrees

Returns:

plotted image

Return type:

np.array

plotted_image = plot_sensor_icon(image_map, sensor_state_object, radius, half_span)
plot_sensor_icon_and_fov(image_map: numpy.array, sensor_state_objects: List[SensorStateObject | None]) numpy.array

Plots sensor icon and FOV

Parameters:
  • image_map (np.array) – image of floor plan

  • sensor_state_objects (List[Optional[SensorStateObject]]) – list of sensor state objects or None

Returns:

plotted image

Return type:

np.array

plotted_image = plot_sensor_icon_and_fov(image_map, sensor_state_objects)
read_json_data(json_data_path: str) Dict[int, Dict[str, Dict[str, Any]]]

Reads JSON data file

Parameters:

json_data_path (str) – JSON data file path

Returns:

dictionary of JSON data

Return type:

Dict[int,Dict[str,Dict[str,Any]]]

json_data_dict = read_json_data(json_data_path)
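Since JSON object keys are always strings, a loader returning `Dict[int, ...]` must convert the top-level keys; a sketch under the assumption that those keys are frame IDs (the nested layout is inferred from the documented return type):

```python
import json
import tempfile
from typing import Any, Dict


def load_frame_keyed_json(json_data_path: str) -> Dict[int, Dict[str, Any]]:
    """Sketch: load a JSON file whose top-level keys are frame IDs
    and convert them from strings to ints."""
    with open(json_data_path, "r") as f:
        raw = json.load(f)
    return {int(frame_id): value for frame_id, value in raw.items()}


# Illustrative input file with two frames
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"0": {"sensor1": {}}, "1": {"sensor1": {}}}, f)
    path = f.name

data = load_frame_keyed_json(path)
print(sorted(data.keys()))  # [0, 1]
```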
read_protobuf_data_with_amr_data(protobuf_data_path: str) Tuple[Dict[int, Dict[str, Dict[str, Any]]], List[str], List[int]]

Reads protobuf data file with AMR data

Parameters:

protobuf_data_path (str) – protobuf data file path

Returns:

dictionary of protobuf data, AMR log, and AMR frame IDs

Return type:

Tuple[Dict[int,Dict[str,Dict[str,Any]]],List[str],List[int]]

protobuf_data_dict, amr_log, amr_frame_ids = read_protobuf_data_with_amr_data(protobuf_data_path)
read_rtls_log(rtls_log_path: str) Tuple[List[str], List[int]]

Reads RTLS log file

Parameters:

rtls_log_path (str) – RTLS log file path

Returns:

RTLS log and frame IDs

Return type:

Tuple[List[str],List[int]]

rtls_log, frame_ids = read_rtls_log(rtls_log_path)
read_topview_video(topdown_video_path: str) cv2.VideoCapture

Reads top-view video

Parameters:

topdown_video_path (str) – top-view video file path

Returns:

top-view video capture

Return type:

cv2.VideoCapture

topview_video_capture = read_topview_video(topdown_video_path)
read_videos(video_dir_path: str) Dict[str, cv2.VideoCapture]

Reads videos

Parameters:

video_dir_path (str) – videos directory path

Returns:

map from video names to video captures

Return type:

Dict[str, cv2.VideoCapture]

map_video_name_to_capture = read_videos(video_dir_path)
shift_center(center: Tuple[int, int], radius: float, angle: float) Tuple[int, int]

Shifts center point of a fan

Parameters:
  • center (Tuple[int,int]) – center point

  • radius (float) – radius of the fan

  • angle (float) – angle of the fan in degree

Returns:

shifted center point

Return type:

Tuple[int,int]

shifted_center = shift_center(center, radius, angle)
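Shifting a point along an angle is basic trigonometry; a sketch that moves the center `radius` pixels along `angle` in degrees (the real function's sign and axis conventions may differ, since image y grows downward):

```python
import math
from typing import Tuple


def shift_fan_center(center: Tuple[int, int], radius: float,
                     angle: float) -> Tuple[int, int]:
    """Sketch: displace `center` by `radius` along `angle` (degrees),
    rounding to the nearest pixel."""
    rad = math.radians(angle)
    return (int(round(center[0] + radius * math.cos(rad))),
            int(round(center[1] + radius * math.sin(rad))))


print(shift_fan_center((100, 100), 50.0, 0.0))   # (150, 100)
print(shift_fan_center((100, 100), 50.0, 90.0))  # (100, 150)
```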