A superpixel is a group of connected pixels that are similar in appearance. Superpixel segmentation divides an image into hundreds of non-overlapping superpixels, rather than thousands or millions of individual pixels. By using superpixels, you can compute features on more meaningful regions and reduce the number of input entities for downstream algorithms.
Superpixels can be computed based on visual appearance, such as color and texture. When depth data (RGB-D) is available, normals and depth can additionally be used to create even better superpixel segmentations.
There are many different superpixel algorithms of varying complexity and performance. Isaac SDK comes with a GPU-friendly superpixel implementation for RGB-D images. The algorithm computes pixel-superpixel associations in parallel on the GPU.
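To illustrate the idea of computing pixel-superpixel associations independently per pixel (and therefore in parallel), the following sketch shows a SLIC-style assignment step in NumPy. This is an illustrative example only, not the Isaac SDK implementation; the function name, the seed layout, and the parameters `m` and `S` are assumptions for the sketch.

```python
import numpy as np

def assign_superpixels(image, seeds, m=10.0, S=20.0):
    """Assign each pixel to its nearest seed using a SLIC-style
    distance that combines color and spatial proximity.

    image: (H, W, 3) float array of pixel colors
    seeds: (K, 5) array of [y, x, c1, c2, c3] cluster centers
    m:     compactness weight trading color vs. spatial distance
    S:     expected superpixel spacing in pixels
    """
    H, W, _ = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Distance from every pixel to every seed. Each pixel's argmin is
    # independent, which is what makes the step GPU-friendly; a real
    # implementation restricts the search to a local window around
    # each seed instead of comparing against all K seeds.
    d_color = np.linalg.norm(
        image[:, :, None, :] - seeds[None, None, :, 2:], axis=-1)
    d_space = np.sqrt((ys[:, :, None] - seeds[None, None, :, 0]) ** 2 +
                      (xs[:, :, None] - seeds[None, None, :, 1]) ** 2)
    d = d_color + (m / S) * d_space
    return np.argmin(d, axis=-1)  # (H, W) label map
```

In a full SLIC-style loop, this assignment step would alternate with re-estimating the seed centers from their assigned pixels.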
The following diagram depicts the flow from camera input to superpixel segmentation. On the left are the color and depth input images. First, 3D points, normals, and a mask of invalid pixels (for example, pixels lying on depth edges) are computed. The superpixel algorithm then computes pixel-superpixel affinity maps, which are integrated into the final superpixel segmentation shown on the right.
The example image above is taken from the dataset published as part of the following paper: A Large-Scale Hierarchical Multi-View RGB-D Object Dataset, Kevin Lai, Liefeng Bo, Xiaofeng Ren, and Dieter Fox, In IEEE International Conference on Robotics and Automation (ICRA), May 2011.
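The first stage of the flow above can be sketched as follows: back-project the depth image to 3D points using pinhole intrinsics, estimate per-pixel normals from neighboring points, and mask pixels with invalid depth. This is an illustrative sketch, not the Isaac SDK code; the function name and the `fx, fy, cx, cy` pinhole model are assumptions.

```python
import numpy as np

def depth_to_points_normals(depth, fx, fy, cx, cy, max_depth=3.0):
    """Back-project a depth image to 3D points, estimate per-pixel
    normals from neighboring points, and mask invalid pixels.

    depth: (H, W) array in meters; 0 or values above max_depth
           are treated as invalid.
    fx, fy, cx, cy: pinhole camera intrinsics.
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    z = depth
    points = np.dstack(((u - cx) * z / fx, (v - cy) * z / fy, z))
    # Normals from the cross product of horizontal and vertical
    # point differences (central differences would be smoother).
    dx = np.zeros_like(points)
    dy = np.zeros_like(points)
    dx[:, :-1] = points[:, 1:] - points[:, :-1]
    dy[:-1, :] = points[1:, :] - points[:-1, :]
    normals = np.cross(dx, dy)
    norm = np.linalg.norm(normals, axis=-1, keepdims=True)
    valid = (z > 0) & (z <= max_depth) & (norm[..., 0] > 1e-9)
    normals = np.where(valid[..., None],
                       normals / np.maximum(norm, 1e-9), 0.0)
    return points, normals, valid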
To try out superpixels, run the image_superpixels sample application with the following command:
bob@desktop:~/isaac$ bazel run //packages/superpixels/apps:image_superpixels
The application runs the superpixel algorithm on a single test image and displays the results in WebSight: open localhost:3000 in a browser to see them.
The live_superpixels sample application computes superpixels from a live camera feed. Run the application with the following command:
bob@desktop:~/isaac$ bazel run //packages/superpixels/apps:live_superpixels
This sample application requires an RGB-D camera and is set up by default to use an Intel RealSense camera.
Note that the RGB-D based superpixel algorithm relies on reasonable depth data. If a pixel has an invalid depth value, the algorithm does not use it for segmentation; unused pixels are marked in black in the results from the sample applications. Pixels with large depth values are also excluded from the segmentation to avoid very noisy superpixels. This depth threshold can be changed via configuration files by adapting the max_depth parameter of the
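As a sketch, a max_depth override in an Isaac application configuration file might look like the following. The node and component names here are placeholders for illustration; use the actual names from your application graph.

```json
{
  "config": {
    "superpixels_node": {
      "superpixels_component": {
        "max_depth": 2.5
      }
    }
  }
}
```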