Mapping

In this tutorial, you will perform the following steps to generate maps for autonomous navigation:

mapping_tutorial_workflow.png

You will need the following before starting the Mapping tutorial:

  • Carter V2.3 Robot with 2TB of available space

  • PC (with browser) connected to the same WiFi Network as the robot

  • Joystick controller, connected to the robot

  • Layout/map of the facility to be mapped (as a PDF); this map should be highlighted to specify where the stairs and other “No-Go” zones are (for semantic labeling)

  • Ethernet connectivity with RJ45 connector (for fast uploading of data)

  • Internet access

  • Sensor calibration file for the specific robot being used. Refer to the Sensor Calibration section for details.

  • Any specific guidelines you may have received from the NVIDIA Solutions team (e.g. the recommended data collection route)

To construct a map, the Carter robot collects data from its front HAWK stereo camera and 360-degree lidar using a preinstalled application that facilitates this process. You can then upload the data to a server for map creation. A remote user interface guides you through the data collection and upload process.

Typically, creating a map for a 100,000 square meter facility will require about 500GB of data.
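The figure above implies roughly 5 MB of recorded data per square meter. As a back-of-envelope sanity check before collection (purely illustrative; this is not an official sizing formula, and actual size depends on route length and speed), you can scale that ratio to your facility:

```python
# Rough storage estimate for mapping data collection.
# Assumption (illustrative only): the ~500 GB per 100,000 m^2 figure
# quoted in this tutorial scales linearly with floor area.
GB_PER_SQUARE_METER = 500 / 100_000  # ~5 MB per square meter

def estimated_data_gb(area_m2: float) -> float:
    """Return the approximate recording size in GB for a given floor area."""
    return area_m2 * GB_PER_SQUARE_METER

# Example: a 20,000 m^2 warehouse would need roughly 100 GB free.
print(f"{estimated_data_gb(20_000):.0f} GB")
```

Remember that the Carter robot needs enough free disk space (2TB recommended above) to hold the recording before upload.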

Workflow Overview

map_collection_workflow.png


Data Recording and Upload

Refer to the Record and Upload Tool section for details on how to launch the recording app, as well as instructions for collecting and uploading data.

We recommend performing a quick data validation before you carry out full data collection. Validation is optional but highly recommended: it may take one work day, but it confirms that the connection between the robot and cloud storage works and that the recorded data is usable for map creation. Without it, you risk having to repeat data collection, which is costly if the area to be mapped is large.

Follow these steps for quick data validation:

  1. Launch the Record and Upload Tool.

  2. Press the START Button to start the Data Recorder application.

  3. Use the joystick controller to move the robot 10m straight ahead at a speed below 1.3m/s (about 3 miles per hour). Note that Carter's speed is capped at 1.1m/s, so driving at the robot's maximum speed is acceptable.

    1. The operator should not move during this period.

  4. Stop, then continue moving again for another 2-3 seconds.

  5. Press the STOP button to stop the Recorder App and upload the data for validation.

  6. Share the name of the .pod file you uploaded with the NVIDIA Solutions team.

  7. The NVIDIA Solutions team will provide data validity approval, typically within one work day.
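The speed limit in step 3 can be double-checked with a simple unit conversion (a throwaway sketch, not part of the mapping toolchain; the 1.3m/s limit and 1.1m/s cap come from the step above):

```python
# Convert the validation-drive speed figures between m/s and mph.
METERS_PER_MILE = 1609.344
SECONDS_PER_HOUR = 3600

def mps_to_mph(mps: float) -> float:
    """Convert meters per second to miles per hour."""
    return mps * SECONDS_PER_HOUR / METERS_PER_MILE

print(f"1.3 m/s = {mps_to_mph(1.3):.1f} mph")  # the recommended limit
print(f"1.1 m/s = {mps_to_mph(1.1):.1f} mph")  # Carter's maximum speed
```

Since Carter's 1.1m/s maximum is below the 1.3m/s limit, no special throttling is needed during the validation drive.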

Full Data Collection

Once you obtain confirmation that the data is valid, you can perform full data collection.

Data Collection Guidelines

Here are some guidelines to follow while collecting data:

  • Collect data when the fewest people are around.

  • If possible, put blinds down on windows to reduce reflections.

  • As the operator, keep a minimum distance of 2 meters from the robot to avoid sensor occlusion (preferably ~10 meters).

  • As the operator, follow behind the robot as it moves.

  • Keep the Carter robot at least 10cm from walls and other surfaces.

  • Try to cover most of the space by moving the Carter robot to each corner.

  • When collecting in open spaces, sweep the space with routes that are 5 meters apart from each other (see the picture below).

  • When collecting in open spaces, ensure there is a loop closure (i.e. the same place is revisited).

  • Try to cover all spaces and complete the data collection in one session; if you need to use more than one session, try to start each new session from the previously stopped location.

full_data_collection_path.png
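The 5-meter sweep and loop-closure guidelines above amount to a lawn-mower (boustrophedon) pattern over open space. The sketch below generates waypoints for a rectangular area and ends by revisiting the start; it is purely illustrative (the function name and rectangle assumption are ours, not part of the mapping tools):

```python
def sweep_waypoints(width_m, depth_m, lane_spacing_m=5.0):
    """Generate (x, y) waypoints for a lawn-mower sweep of a rectangle.

    Lanes run along the y-axis, spaced lane_spacing_m apart on the x-axis.
    The final waypoint returns to the origin so the route includes a loop
    closure, as recommended in the guidelines above.
    """
    waypoints = []
    x = 0.0
    lane = 0
    while x <= width_m:
        if lane % 2 == 0:            # even lanes: drive "up" the lane
            waypoints += [(x, 0.0), (x, depth_m)]
        else:                        # odd lanes: drive back "down"
            waypoints += [(x, depth_m), (x, 0.0)]
        x += lane_spacing_m
        lane += 1
    waypoints.append((0.0, 0.0))     # revisit the start for loop closure
    return waypoints

# Example: a 20 m x 30 m open area swept with the recommended 5 m spacing.
route = sweep_waypoints(20, 30)
```

In practice you drive the robot manually with the joystick; the sketch only illustrates the shape of the route the guidelines describe.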


Data Collection Steps

  1. Launch the Record and Upload Tool.

  2. Press the START Button to start the Data Recorder application.

  3. Move the robot as per guidelines above.

  4. Press the STOP button to stop the Recorder App and upload the data.

  5. Share the name of the uploaded .pod file with the NVIDIA Solutions team.

  6. Quit the application from the PC once done.

Additional Guidelines:

  • Use Chrome if possible.

  • Record in an environment with good Wi-Fi coverage. Operating in areas with poor Wi-Fi may cause the user interface to become unresponsive. Refresh the browser if this occurs.

  • Connect to Ethernet before uploading data.

Triggering Map Creation

Once all data has been uploaded, send an email to the NVIDIA Solutions team with the following information:

  • The project name (e.g. the name of the building).

  • The approximate latitude and longitude where the data collection occurred.

  • A Google Maps or OpenStreetMap screenshot indicating the approximate boundary of the data collection.

You will receive a notification once the occupancy map has been created. The occupancy map should be ready in about two work days after data collection is uploaded.

You can label the semantic map using the Map Annotation Tool. Alternatively, if it’s part of your contract, the NVIDIA Solutions team can provide the semantic map; the delivery time depends on the map size and can take one or two extra work days.

Downloading and Accessing the Map

The Mapping service provides three types of map:

  • 2D occupancy map

  • Semantic map

  • 3D occupancy map

If you have access to the Map Annotation Tool, you can use it to download the 2D occupancy map and semantic map. If not, you can request the NVIDIA Solutions team to provide the map files.

If you are following the Running the Navigation Stack with Isaac AMR Cloud Services workflow and choose to use maps served by DeepMap, find the Map UUID in the Annotation Tool and provide it as part of the Navigation Stack config. The Navigation Stack will then automatically download the latest released 2D occupancy map and semantic map. Refer to the Running the Navigation Stack with Isaac AMR Cloud Services section for more details.

Note

The output 2D occupancy map is limited by the lidar field of view (FOV). Descending stairs and other features below floor level may therefore be missing from the map if the robot's lidar did not observe them; these will need to be added in the semantic map.

The 3D occupancy map will be provided by the NVIDIA Solutions team in the form of a .ply file.
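The .ply (Polygon File Format) delivery is a standard point-cloud/mesh format, so any PLY-capable viewer or library can open it. As a minimal illustration, the sketch below reads the vertex count from a PLY header using only the Python standard library (the stub file contents are fabricated for the example and are not real Isaac map output; in practice you would likely use a point-cloud library such as Open3D):

```python
def read_ply_vertex_count(path):
    """Return the vertex count declared in a PLY file header.

    Only the header is parsed; this works for both ascii and binary PLY
    files, since the PLY header itself is always plain text.
    """
    with open(path, "rb") as f:
        if f.readline().strip() != b"ply":
            raise ValueError("not a PLY file")
        for raw in f:
            line = raw.strip()
            if line.startswith(b"element vertex"):
                return int(line.split()[-1])
            if line == b"end_header":
                break
    raise ValueError("no vertex element found in header")

# Example with a tiny ASCII PLY stub (illustrative only):
stub = (b"ply\nformat ascii 1.0\nelement vertex 3\n"
        b"property float x\nproperty float y\nproperty float z\n"
        b"end_header\n0 0 0\n1 0 0\n0 1 0\n")
with open("stub.ply", "wb") as f:
    f.write(stub)
print(read_ply_vertex_count("stub.ply"))
```

A quick header check like this can confirm a delivered .ply file is intact before loading the full point cloud.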

To sign up for the Map Annotation tool, contact the NVIDIA Isaac Mapping team and provide the following information:

  1. Email address

  2. User name

  3. Name of the company or organization

The NVIDIA Solutions team will create an account for the user, usually within 24 hours. You will then receive an email from no-reply-deepmap@nvidia.com with the initial password and user name. You can log in to the tool hosted at https://robotictools.deepmap.com/labeling/login

The first time you log in, you will be required to change the password.

Follow the steps in the Map Annotation Tool section to annotate the map.

© Copyright 2018-2023, NVIDIA Corporation. Last updated on Oct 30, 2023.