Pick and Place Example Application

This package provides an application scaffold for pick-and-place scenarios. It implements the high-level steps required for performing pick-and-place tasks and interfaces with two robotic manipulators: the UR10 arm and the Franka Emika arm. The example exercises actuator control, object detection, and grasping.

Two different scenarios are included in this example application:

  • A UR10 arm picking and placing boxes from one pallet to another, using a suction-cup grasping mechanism

  • A Franka Emika arm picking up colored cubes and placing them on a stack, then unstacking them again, using a two-finger gripper

[Image: UR10 box pick-and-place scenario in Omniverse Kit (ovkit_ur10.png)]

[Image: Franka Emika cube stacking scenario in Omniverse Kit (ovkit_franka.png)]

Both scenarios are controlled by a central Isaac SDK application driven by a behavior tree. The behavior tree defines two tasks that work in tandem: a pick task to detect and grasp objects and a place task to position them at their destination poses.
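
As a rough mental model, the root of the tree behaves like a sequence node: the place task only starts once the pick task has succeeded. The following plain-Python sketch illustrates that control flow; the class and task names are illustrative stand-ins, not the behavior-tree components the actual application loads:

    # Plain-Python illustration of the behavior-tree structure. The classes
    # and task bodies are illustrative stand-ins, not Isaac SDK components.

    class Sequence:
        """Runs its children in order and stops at the first failure."""
        def __init__(self, *children):
            self.children = children

        def run(self):
            return all(child.run() for child in self.children)

    class Task:
        """A named leaf wrapping a callable that reports success or failure."""
        def __init__(self, name, action):
            self.name, self.action = name, action

        def run(self):
            print("running task:", self.name)
            return self.action()

    # Place only starts once pick has succeeded.
    root = Sequence(
        Task("pick", lambda: True),   # detect and grasp an object
        Task("place", lambda: True),  # position it at its destination pose
    )
    root.run()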

The high-level steps performed in the pick task are as follows:

  1. Go to a pose from which objects can be seen.

  2. Detect objects in the field of view.

  3. Go to a pre-grasp pose to grasp an object (e.g. slightly above it).

  4. Open the arm’s gripper.

  5. Close in on the object to grasp it.

  6. Close the arm’s gripper.

  7. Lift the object from the surface.

After performing these steps, the object should be in the robot arm’s grasping mechanism, ready for a subsequent placing task.
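
In code, the pick sequence can be summarized roughly as follows. This is a minimal sketch in which the arm and detector interfaces and the pose helpers are hypothetical stand-ins, not Isaac SDK API; the actual application drives these steps through behavior-tree nodes and controller modules:

    # Minimal sketch of the pick sequence. All names below are hypothetical.

    OBSERVATION_POSE = "observation"       # a pose from which objects are visible

    def pre_grasp_pose(obj):
        return ("above", obj)              # e.g. slightly above the object

    def grasp_pose(obj):
        return ("at", obj)

    def lift_pose(obj):
        return ("lifted", obj)

    def pick(arm, detector):
        arm.go_to(OBSERVATION_POSE)         # 1. move to where objects can be seen
        obj = detector.detect_objects()[0]  # 2. detect objects in the field of view
        arm.go_to(pre_grasp_pose(obj))      # 3. go to the pre-grasp pose
        arm.open_gripper()                  # 4. open the gripper
        arm.go_to(grasp_pose(obj))          # 5. close in on the object
        arm.close_gripper()                 # 6. close the gripper
        arm.go_to(lift_pose(obj))           # 7. lift the object from the surface
        return obj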

For the place task, the required steps are as follows (a matching sketch follows the list):

  1. Go to a pre-drop-off pose (e.g. slightly above the drop-off point).

  2. Go to the drop-off pose to place the grasped object.

  3. Open the arm’s gripper.

  4. Lift the arm away from the object.
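
A matching sketch of the place sequence, reusing the hypothetical arm interface from the pick sketch above:

    # Minimal sketch of the place sequence; names are hypothetical, as above.

    def pre_drop_off_pose(destination):
        return ("above", destination)      # e.g. slightly above the drop-off point

    def drop_off_pose(destination):
        return ("at", destination)

    def retreat_pose(destination):
        return ("lifted", destination)

    def place(arm, destination):
        arm.go_to(pre_drop_off_pose(destination))  # 1. go to the pre-drop-off pose
        arm.go_to(drop_off_pose(destination))      # 2. go to the drop-off pose
        arm.open_gripper()                         # 3. release the grasped object
        arm.go_to(retreat_pose(destination))       # 4. lift the arm away from the object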

Independent of the robotic manipulator used (and apart from the actual controller modules loaded), the scaffolding of the application stays the same; its purpose is to show how to build such an application using Isaac SDK.

The pick-and-place example uses Omniverse Kit, which simulates the UR10 and Franka Emika arms. To set up and start Omniverse, refer to the Omniverse Kit documentation.

Once Omniverse is running, select the address field in the Content panel at the lower part of the Omniverse Kit window and enter this URL to load the UR10 robot assets from the Omniverse server:

omni:/Isaac/Samples/Isaac_SDK/Scenario/sortbot_sim.usd

To load the assets for the Franka Emika arm, use this URL:

omni:/Isaac/Samples/Isaac_SDK/Scenario/franka_table.usd

Confirm with Return. Afterwards, on the Robot Engine Bridge panel, click Create Application. This starts the Isaac SDK backend.

[Image: Robot Engine Bridge panel with the Create Application button (ovkit_panels_bridge.png)]

Once the backend is running, start the simulation by clicking the Play button in the Omniverse window.

Viewport Camera Settings

Omniverse allows you to use different scene cameras as the source of the viewport image. In the Camera menu at the top of the viewport, ensure that the Perspective camera is selected:

[Image: Camera menu with the Perspective camera selected (ovkit_camera.png)]


To run the application with a connection to Omniverse, execute one of the following commands:

  • UR10 box pick-and-place scenario:

    bazel run //apps/samples/pick_and_place -- --arm ur10

  • Franka cube stacking/unstacking scenario:

    bazel run //apps/samples/pick_and_place -- --arm franka --groundtruth

This starts the respective example application and moves the simulated robot arm in Omniverse according to the selected scenario. For a full list of command-line options, run:

bazel run //apps/samples/pick_and_place -- --help

Viewport Camera for Object Detection

While the Franka cube stacking/unstacking tasks in this example use ground-truth pose information from the simulation, the UR10 scenario requires an RGB image from its camera frame to enable model-based object pose estimation. Once the UR10 arm has moved toward the left pallet, it stops and tries to perceive the boxes on the pallet.

Using the same menu described in Viewport Camera Settings above, select the Camera entry in the Camera submenu:

[Image: Camera submenu with the Camera entry selected (ovkit_camera_2.png)]

The viewport now shows the synthetic image from the camera at the arm's end effector, which is used by the pose estimation codelet to determine the box poses. Once the arm has detected the boxes and is closing in to grasp them, you can switch the camera back to Perspective.
