The goal of the Manual Extrinsic Calibration process is to provide the position and orientation of a set of sensors in the robot frame. This process includes the calibration of one sensor (e.g. LIDAR) with respect to the robot frame and multiple calibrations of sets of two or more sensors (e.g. camera-LIDAR or camera-camera) with overlapping fields of view.
The Isaac 2.0 manual calibration supports the following calibration options:
3D LIDAR - stereo camera
3D LIDAR - depth camera
3D LIDAR - ground
Manual Extrinsic Calibration requires static scenes, which you can collect with the Record and Upload tool.
The manual extrinsic calibration process involves the following steps (outlined in the image below):
Record static scenes as detailed in the sections below.
Contact NVIDIA after uploading the data recordings and request the isaac_calibration.json calibration file.
After receiving the isaac_calibration.json file, deploy it to the corresponding robot in the following location:
Calibration files are validated by NVIDIA; there is no need to perform additional validation of the calibration files apart from deploying them.
Before using Manual Extrinsic Calibration, you need to launch the Recorder app
on your robot. When launching the Recorder app, modify the
--param argument as follows:
Replace <YOUR_S3_BUCKET> with the name of your AWS S3 bucket.
Calibration recordings must be uploaded to the root of the S3 bucket, without any additional subfolders.
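The bucket-root requirement can be expressed as a small helper that derives the S3 object key from a local file path; a minimal sketch in Python (the function name and the commented-out boto3 call are illustrative, not part of the Isaac tooling):

```python
import os


def s3_key_for(local_path: str) -> str:
    """Return the S3 object key for a recording so that it lands at the
    bucket root: the key is just the file name, with no subfolder prefix."""
    return os.path.basename(local_path)


# Illustrative upload call (requires boto3 and configured AWS credentials):
#   boto3.client("s3").upload_file(local_path, "<YOUR_S3_BUCKET>",
#                                  s3_key_for(local_path))
```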
After launching the Record and Upload tool, fill in the following fields:
Drive the robot to six different scenes, as described in the Static Scenes section below. At each of the scenes, take a 1-second recording by pressing Start and Stop in the recording frontend. The outcome of this step is six 1-second recordings stored as .pod files.
Upload the recorded .pod files using the Upload tab of the Record and Upload Tool. We recommend connecting the robot to a wired network via an Ethernet cable before uploading the data.
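Before uploading, the outcome of the recording step can be sanity-checked: exactly six recordings, each a .pod file. A minimal sketch (the function and file names are hypothetical, not part of the tool):

```python
def check_recordings(filenames):
    """Return True if the recording set matches the expected outcome of
    the recording step: exactly six files, each with the .pod extension."""
    return len(filenames) == 6 and all(f.endswith(".pod") for f in filenames)
```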
If the calibration file is not present in the expected location, an ERROR message is printed to the console.
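The presence check can also be scripted on the robot; a minimal sketch, assuming the deployment path is passed in as a parameter (the actual location is given in the deployment step above):

```python
import os


def calibration_file_present(path: str) -> bool:
    """Return True if the deployed calibration file exists at `path`;
    otherwise print an ERROR to the console, mirroring the app's behavior."""
    if not os.path.isfile(path):
        print(f"ERROR: calibration file not found at {path}")
        return False
    return True
```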
Static Scene Guidelines
The Manual Extrinsic Calibration does not require calibration boards. Instead, it uses recordings of static, structured scenes on flat ground, similar to the example scene below:
The requirements for each recording include the following:
A static and short (1-second) scene without any movement of the robot or objects around it
Empty space in a circle of 2.5 m (about 8 feet) radius around the robot.
Multiple structured objects with sharp edges at different distances and locations in the image, in the areas that have LIDAR coverage (refer to the Good Examples below). There should be large distance variations between the foreground objects and their background.
Avoid cluttered scenes and objects with round edges (refer to the Bad Examples below). Minimize the number of direct light sources, and avoid black foreground objects in particular (dark objects are acceptable).
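The 2.5 m clearance requirement above can be checked against a LIDAR scan; a minimal sketch over 2D points in the robot frame (the function name and point format are assumptions for illustration):

```python
import math


def area_is_clear(points_xy, radius_m=2.5):
    """Return True if no LIDAR return falls inside the given radius around
    the robot origin, i.e. the circle around the robot is empty."""
    return all(math.hypot(x, y) > radius_m for x, y in points_xy)
```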
Examples of good geometries for the static scenes include handrails, thin poles, flat planes like doors or cubicle walls, columns, bridges, and low-hanging ceilings or table tops with edges clearly visible against the background:
Examples of bad geometries, which you should avoid in the static-scene recordings, include objects with round edges, like chairs and round poles, and cluttered scenes where it is difficult to find the edges of the foreground objects with different sensor modalities:
The Isaac 2.5 release is planned to support automated extrinsic calibration for 3D LIDAR and cameras on the Carter robot. Isaac 2.5 will include a live data collection tool for calibration: an operator will be required to position a calibration board in certain locations in front of the robot sensors. The recording that includes the desired board positions will be used to generate a calibration file for the set of sensors.