Local mapping

A local map is a simplified grid-based representation of the immediate environment around the robot. This representation is crucial for safely planning around static and dynamic obstacles near the robot. The local mapping pipeline takes as input sensor messages (such as FlatscanProto) from the different sensors attached to the robot. Each sensor follows an independent firing pattern and can be arbitrarily oriented on the robot. The output of the local mapping pipeline is a unified distance map that is consumed by the planning stack. The distance map is a one-channel image on a grid map, where each cell stores the distance from the robot to the closest obstacle.

In order to fuse data from different sensors and generate a unified map, an intermediate grid-based representation called an Evidence Grid Map is used. Dempster–Shafer theory helps construct an occupancy grid around the robot in which each cell contains three floats: the belief mass that the cell is free, the belief mass that the cell is occupied, and the leftover mass, which is assigned as uncertain. Each sensor message is parsed into an evidence grid map, and all these individual grid maps are fused together to construct a unified evidence grid map of the environment.
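The three belief masses per cell can be sketched as follows. This is a minimal illustration of the Dempster–Shafer cell representation; the class and field names are hypothetical, not Isaac SDK types:

```python
from dataclasses import dataclass

# Hypothetical sketch of a Dempster-Shafer evidence cell; the names are
# illustrative and do not come from the Isaac SDK.
@dataclass
class EvidenceCell:
    free: float       # belief mass that the cell is free
    occupied: float   # belief mass that the cell is occupied

    @property
    def uncertain(self) -> float:
        # The leftover mass is assigned to the "uncertain" hypothesis,
        # so the three masses always sum to 1.
        return 1.0 - self.free - self.occupied

cell = EvidenceCell(free=0.6, occupied=0.1)
print(round(cell.uncertain, 6))  # → 0.3
```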

Evidence Grid Map messages are transmitted as three-channel ImageProto messages and can be viewed with the EvidenceGridMapViewer codelet.

A representative factory environment and its corresponding unified evidence grid map are shown here:


White represents free cells, black represents occupied cells, and green represents uncertain cells.


The architecture of the local mapping pipeline is depicted here:


  • The structure of each evidence map is defined through its lattice definition. A lattice specifies the robot center, cell size, dimensions, and frames for the evidence grid map. The parameters in the corresponding LatticeGenerator codelets can be changed to alter the structure of the respective evidence grid maps.
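As a rough sketch of what a lattice provides, the following hypothetical class maps metric offsets from the robot into cell indices. The names, cell size, and dimensions here are illustrative assumptions, not LatticeGenerator parameters or defaults:

```python
from dataclasses import dataclass

# Hypothetical lattice sketch: the names and values below are assumptions
# for illustration, not the Isaac SDK lattice definition.
@dataclass
class Lattice:
    cell_size: float   # meters per cell
    dimensions: tuple  # (rows, cols) of the grid
    robot_center: tuple  # robot position within the grid, in cells

    def to_cell(self, x, y):
        # Convert a metric offset (x, y) from the robot into a cell index.
        return (round(self.robot_center[0] + x / self.cell_size),
                round(self.robot_center[1] + y / self.cell_size))

lattice = Lattice(cell_size=0.05, dimensions=(256, 256), robot_center=(128, 128))
print(lattice.to_cell(1.0, -0.5))  # → (148, 118)
```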

  • On receiving a flatscan message, the RangeScanToEvidenceMap codelet converts the range values into a bird’s-eye-view evidence grid map. All grid cells before the hit point are marked free, while those beyond the hit point are marked uncertain. The lattice proto specifies the center of the sensor on the grid map and helps visualize the sensor range values with respect to its own lattice.
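The per-ray classification can be sketched as follows. This is an illustrative simplification that assigns full belief masses to each class; a real sensor model would distribute partial masses, and none of these names come from the SDK:

```python
# Illustrative sketch (not the Isaac codelet): classify cells along one
# flatscan ray as (free, occupied, uncertain) belief masses.
def classify_ray(num_cells, hit_index):
    cells = []
    for i in range(num_cells):
        if i < hit_index:
            cells.append((1.0, 0.0, 0.0))  # before the hit: observed free
        elif i == hit_index:
            cells.append((0.0, 1.0, 0.0))  # the hit point: occupied
        else:
            cells.append((0.0, 0.0, 1.0))  # beyond the hit: uncertain
    return cells

print(classify_ray(5, 2))
```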

  • Once a sensor evidence grid is generated, it is fused with the unified grid computed at the previous timestep in the EvidenceMapFusion codelet. The last computed fused map is transformed based on the relative movement of the robot between ticks. The sensor map is transformed based on the alignment of the sensor with respect to the robot (as specified in the robot model). Both transformed maps are then stacked and their corresponding evidence grid values fused (interpolating wherever necessary). One of four fusion rules can be selected by setting ISAAC_PARAM(FusionOperator, fusion_operator) to one of the following strings: “pcr6” (default), “dempster_shafer”, “josang_average”, or “josang_cumulative”.


    Currently the PCR6 fusion rule is CUDA-accelerated, with at least 4x faster results than equivalent CPU-based fusion implementations.
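To illustrate the idea behind the “dempster_shafer” operator, the following sketch combines two evidence cells with Dempster’s rule of combination over the hypotheses {free, occupied} plus the ignorance (uncertain) mass. This is a conceptual illustration, not the SDK implementation:

```python
# Sketch of Dempster's rule of combination for a single cell, with each
# source given as (free, occupied, uncertain) masses summing to 1.
def dempster_fuse(a, b):
    a_free, a_occ, a_unc = a
    b_free, b_occ, b_unc = b
    # Conflict: one source says free while the other says occupied.
    conflict = a_free * b_occ + a_occ * b_free
    norm = 1.0 - conflict
    # Renormalize the agreeing mass products by (1 - conflict).
    free = (a_free * b_free + a_free * b_unc + a_unc * b_free) / norm
    occ = (a_occ * b_occ + a_occ * b_unc + a_unc * b_occ) / norm
    return (free, occ, 1.0 - free - occ)

fused = dempster_fuse((0.6, 0.1, 0.3), (0.5, 0.2, 0.3))
print([round(m, 3) for m in fused])  # → [0.759, 0.133, 0.108]
```

Note how two weakly agreeing "free" observations reinforce each other: the fused free mass exceeds either input.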

  • EvidenceMapInpaint paints areas in the evidence grid map with the desired evidence mass values. For example, if the sensor configuration has blind spots, the values in those areas can be manually set through this codelet.

  • EvidenceToBinaryMap converts the evidence map to a binary map based on parameterized thresholds for the free and occupied classes.
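A minimal sketch of such a thresholding step, assuming hypothetical threshold values and a conservative tie-breaking rule (neither taken from the codelet):

```python
# Illustrative conversion from evidence masses to a binary cell. The
# thresholds and the tie-breaking rule are assumptions, not the
# EvidenceToBinaryMap defaults.
def to_binary(cell, free_threshold=0.6, occupied_threshold=0.4):
    free, occupied, _uncertain = cell
    if occupied >= occupied_threshold:
        return 1  # treat as obstacle
    if free >= free_threshold:
        return 0  # treat as free
    return 1      # conservatively block uncertain cells

print([to_binary(c) for c in [(0.9, 0.05, 0.05),
                              (0.1, 0.8, 0.1),
                              (0.3, 0.2, 0.5)]])  # → [0, 1, 1]
```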

  • The BinaryToDistanceMap codelet converts the binary map into a distance map to be used by the planner.
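Conceptually, this step can be sketched as a multi-source breadth-first search that seeds the queue with every occupied cell, so each cell ends up with its distance to the nearest obstacle. This illustration assumes 4-connectivity and unit cell distances; it is not the SDK implementation:

```python
from collections import deque
import math

# Sketch of a grid distance map: multi-source BFS from every occupied
# cell, giving each cell its distance (in cells) to the nearest obstacle.
def distance_map(binary):
    rows, cols = len(binary), len(binary[0])
    dist = [[math.inf] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):
        for c in range(cols):
            if binary[r][c]:  # 1 marks an obstacle cell
                dist[r][c] = 0.0
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] == math.inf:
                dist[nr][nc] = dist[r][c] + 1.0
                queue.append((nr, nc))
    return dist

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
for row in distance_map(grid):
    print(row)
# → [2.0, 1.0, 2.0]
#   [1.0, 0.0, 1.0]
#   [2.0, 1.0, 2.0]
```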

© Copyright 2018-2020, NVIDIA Corporation. Last updated on Apr 19, 2024.