3D Object Pose Refinement

3D Object Pose Refinement plays a crucial role in applications like manipulation, where the accuracy of detected object poses affects the overall performance of the robot. The 3D Object Pose Refinement application in Isaac SDK provides a framework to test and run the refinement algorithm.

The algorithm used in this application is based on the Iterative Closest Point (ICP) algorithm. It uses the symmetric ICP formulation by Rusinkiewicz, in which, for given surfaces P and Q, the point-to-plane error is computed more robustly with a symmetric objective that uses the surface normals of both surfaces, instead of treating one surface as points and the other as a plane.
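
As a rough illustration, for corresponding points p_i on P and q_i on Q with unit normals n_p_i and n_q_i, the symmetric objective sums squared residuals of the form (p_i - q_i) · (n_p_i + n_q_i). The following minimal NumPy sketch evaluates that error for a fixed set of correspondences; it is illustrative only and omits the rigid-transform parametrization that the actual optimization iterates over:

    import numpy as np

    def symmetric_point_to_plane_error(p, q, n_p, n_q):
        """Sum of squared symmetric point-to-plane residuals.

        p, q:     (N, 3) corresponding points sampled from surfaces P and Q
        n_p, n_q: (N, 3) unit surface normals at those points
        """
        # Classic point-to-plane ICP projects the residual onto a single
        # normal, e.g. (p - q) . n_q; the symmetric form uses the sum of both
        # normals, which behaves better under coarse initial alignments.
        residuals = np.einsum("ij,ij->i", p - q, n_p + n_q)
        return np.sum(residuals ** 2)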

Figure: 3D Pose Refinement Application (pose_refinement_application.png)

Isaac SDK provides sample apps to run 3D Object Pose Refinement on RGBD data. Each sample app runs object detection and object pose estimation on the RGB image in parallel with superpixel generation on the RGBD image. The surflets generated from the superpixels are assigned object IDs in the surflet object assignment module. The outputs of these submodules are then passed to the refinement module, which optimizes the alignment between model and measurement surflets and produces the final refined pose.
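
The data flow through these submodules can be summarized as in the following sketch. The function names are placeholders standing in for the corresponding Isaac SDK modules, not actual API calls:

    # Conceptual data flow of the pose refinement sample apps.
    # All functions below are hypothetical placeholders for Isaac SDK modules.
    def refine_pose(rgb_image, rgbd_image):
        # Branch 1: detection and initial pose estimation on the RGB image.
        detections = detect_objects(rgb_image)
        initial_poses = estimate_poses(rgb_image, detections)

        # Branch 2: superpixels and measurement surflets from the RGBD image.
        superpixels = generate_superpixels(rgbd_image)
        surflets = surflets_from_superpixels(superpixels)

        # Assign each measurement surflet an object ID from the detections.
        assigned_surflets = assign_surflets_to_objects(surflets, detections)

        # Optimize the model-to-measurement surflet alignment, starting from
        # the initial pose estimates, to produce the refined poses.
        return optimize_surflet_alignment(initial_poses, assigned_surflets)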

The pose refinement application files are located in packages/object_pose_refinement/apps.

  1. To run pose refinement using a RealSense camera, use the following command:


    bob@desktop:~/isaac$ bazel run packages/object_pose_refinement/apps:pose_refinement_camerafeed -- --config packages/object_pose_refinement/apps/pose_refinement_dolly.config.json --more packages/object_pose_estimation/apps/pose_cnn_decoder/detection_pose_estimation_cnn_inference_dolly.config.json


  2. To run replay on a previously collected log, use the following command:


    bob@desktop:~/isaac$ bazel run packages/object_pose_refinement/apps:pose_refinement_replay -- --config packages/object_pose_refinement/apps/pose_refinement_dolly.config.json --more packages/object_pose_estimation/apps/pose_cnn_decoder/detection_pose_estimation_cnn_inference_dolly.config.json


  3. To run pose refinement on static images, use the following command:


    bob@desktop:~/isaac$ bazel run packages/object_pose_refinement/apps:pose_refinement_imagefeed -- --config packages/object_pose_refinement/apps/pose_refinement_dolly.config.json --more packages/object_pose_estimation/apps/pose_cnn_decoder/detection_pose_estimation_cnn_inference_dolly.config.json


Sample data for the dolly object is provided, allowing you to run the last two apps using logs and images, respectively. You can visualize the refined pose in Sight at http://localhost:3000.

There are two types of visualization for the refined pose:

  • A 3D bounding box, which requires both the size of the 3D bounding box at zero orientation and the transformation from the object center to the bounding box center. Configure these parameters in the viewers/ObjectRefinementViewer component in the application files (see the sketch after this list).

  • A rendering of the CAD model in the scene, which requires the path to the directory containing the object CAD model and the model file names. These correspond to the assetroot and assets parameters, respectively, in the websight component in the application files.
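
For intuition on the first option: the visualized box has the configured size, defined at zero orientation, and is placed by composing the refined object pose with the configured object-center-to-bounding-box-center transform. A minimal sketch under those assumptions (plain 4x4 homogeneous transforms, not Isaac SDK types):

    import numpy as np

    def bounding_box_corners(box_size, object_T_box, camera_T_object):
        """Corners of the visualized 3D bounding box in the camera frame.

        box_size:        (3,) box extents at zero orientation
        object_T_box:    4x4 transform from object center to box center
        camera_T_object: 4x4 refined object pose in the camera frame
        """
        # Eight corners of the box in its own zero-orientation frame.
        half = 0.5 * np.asarray(box_size)
        signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                                       for sy in (-1, 1)
                                       for sz in (-1, 1)])
        corners = np.c_[signs * half, np.ones(8)]        # homogeneous, (8, 4)

        # Compose camera <- object <- box and apply it to the corners.
        camera_T_box = camera_T_object @ object_T_box
        return (camera_T_box @ corners.T).T[:, :3]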

Visualization is also provided for debugging in the object_pose_refinement.object_pose_refinement.PoseRefinement component. It allows you to visualize the optimization steps in Sight and includes the following (a conceptual sketch follows the list):

  • Measurement surflets (generated from the superpixels)

  • Model surflets (loaded via the cask file)

  • Surflet positions at the estimated 3D pose

  • Surflet positions at the refined 3D pose
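
Conceptually, a surflet is an oriented surface element: a 3D point paired with a surface normal. The last two channels overlay the measurement surflets with the model surflets transformed by the estimated and the refined pose, which is roughly the following operation (illustrative only, not the Isaac SDK data types):

    import numpy as np

    def transform_surflets(points, normals, pose):
        """Apply a 4x4 rigid transform to surflets (points plus unit normals)."""
        rotation, translation = pose[:3, :3], pose[:3, 3]
        return points @ rotation.T + translation, normals @ rotation.T

    # Overlaying model surflets at the estimated pose vs. the refined pose:
    #   est_pts, est_nrm = transform_surflets(model_pts, model_nrm, estimated_pose)
    #   ref_pts, ref_nrm = transform_surflets(model_pts, model_nrm, refined_pose)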
