# Overview¶

The following sections list all components available in Isaac SDK. For each component, the incoming and outgoing message channels and their corresponding message types are listed. Additionally, all parameters are described with their names, types, and default values.

The following table gives an overview of all components. The columns ‘# Incoming’, ‘# Outgoing’ and ‘# Parameters’ indicate how many incoming message channels, outgoing message channels, and parameters the corresponding component has.

| Namespace | Name | # Incoming | # Outgoing | # Parameters |
| --- | --- | --- | --- | --- |
| isaac | ArgusCsiCamera | 0 | 1 | 5 |
| isaac | AudioCapture | 0 | 1 | 5 |
| isaac | AudioFileLoader | 1 | 1 | 3 |
| isaac | AudioPlayback | 1 | 0 | 0 |
| isaac | DummyStereoCamera | 0 | 3 | 2 |
| isaac | DummyTensorListGenerator | 0 | 1 | 4 |
| isaac | ImageComparison | 2 | 0 | 2 |
| isaac | Joystick | 0 | 1 | 7 |
| isaac | KayaBaseDriver | 1 | 1 | 13 |
| isaac | PanTiltDriver | 1 | 2 | 7 |
| isaac | RealsenseCamera | 0 | 4 | 34 |
| isaac | RealsenseCameraSimple | 0 | 2 | 6 |
| isaac | SegwayRmpDriver | 1 | 1 | 5 |
| isaac | SerialBMI160 | 0 | 1 | 1 |
| isaac | SlackBot | 1 | 1 | 2 |
| isaac | StereoVisualOdometry | 3 | 0 | 1 |
| isaac | V4L2Camera | 0 | 1 | 16 |
| isaac | VelodyneDriver | 0 | 1 | 3 |
| isaac | Vicon | 0 | 2 | 3 |
| isaac | ZedCamera | 0 | 5 | 6 |
| isaac.alice | Config | 0 | 0 | 0 |
| isaac.alice | Failsafe | 0 | 0 | 1 |
| isaac.alice | FailsafeHeartbeat | 0 | 0 | 3 |
| isaac.alice | MessageLedger | 0 | 0 | 1 |
| isaac.alice | Pose | 0 | 0 | 0 |
| isaac.alice | PoseInitializer | 0 | 0 | 7 |
| isaac.alice | PyCodelet | 0 | 0 | 1 |
| isaac.alice | Recorder | 0 | 0 | 3 |
| isaac.alice | Replay | 0 | 0 | 4 |
| isaac.alice | ReplayBridge | 1 | 1 | 1 |
| isaac.alice | Scheduling | 0 | 0 | 4 |
| isaac.alice | Sight | 0 | 0 | 0 |
| isaac.alice | TcpPublisher | 0 | 0 | 1 |
| isaac.alice | TcpSubscriber | 0 | 0 | 4 |
| isaac.alice | Throttle | 0 | 0 | 6 |
| isaac.audio | AudioEnergyCalculation | 1 | 1 | 2 |
| isaac.audio | SoundSourceLocalization | 1 | 1 | 4 |
| isaac.audio | VoiceCommandConstruction | 1 | 1 | 6 |
| isaac.audio | VoiceCommandFeatureExtraction | 1 | 1 | 11 |
| isaac.dummies | ImageLoader | 0 | 2 | 9 |
| isaac.flatsim | DifferentialBasePhysics | 1 | 1 | 5 |
| isaac.flatsim | DifferentialBaseSimulator | 2 | 2 | 7 |
| isaac.flatsim | SimRangeScan | 0 | 1 | 10 |
| isaac.hgmm | HgmmPointCloudMatching | 1 | 0 | 9 |
| isaac.imu | IioBmi160 | 0 | 1 | 2 |
| isaac.imu | ImuCalibration2D | 1 | 0 | 3 |
| isaac.imu | ImuCorrector | 1 | 1 | 3 |
| isaac.imu | ImuSim | 1 | 1 | 8 |
| isaac.kinova_jaco | KinovaJaco | 2 | 4 | 2 |
| isaac.map | Map | 0 | 0 | 2 |
| isaac.map | MapBridge | 1 | 1 | 0 |
| isaac.map | OccupancyGridMapLayer | 0 | 0 | 3 |
| isaac.map | PolygonMapLayer | 0 | 0 | 2 |
| isaac.map | WaypointMapLayer | 0 | 0 | 1 |
| isaac.ml | ColorCameraEncoder | 1 | 1 | 3 |
| isaac.ml | DetectionDecoder | 1 | 1 | 3 |
| isaac.ml | DetectionEncoder | 1 | 1 | 2 |
| isaac.ml | HeatmapDecoder | 1 | 1 | 2 |
| isaac.ml | HeatmapEncoder | 1 | 1 | 0 |
| isaac.ml | SampleAccumulator | 1 | 0 | 1 |
| isaac.ml | SegmentationDecoder | 1 | 1 | 1 |
| isaac.ml | SegmentationEncoder | 1 | 1 | 1 |
| isaac.ml | Teleportation | 1 | 2 | 19 |
| isaac.ml | TensorReshape | 1 | 1 | 1 |
| isaac.ml | TensorSynchronization | 2 | 1 | 1 |
| isaac.ml | TensorflowInference | 1 | 1 | 4 |
| isaac.navigation | BinaryToDistanceMap | 1 | 1 | 6 |
| isaac.navigation | Cartographer | 1 | 0 | 6 |
| isaac.navigation | DetectionUnprojection | 2 | 1 | 3 |
| isaac.navigation | DifferentialBaseOdometry | 1 | 1 | 5 |
| isaac.navigation | DifferentialBaseWheelImuOdometry | 2 | 1 | 8 |
| isaac.navigation | FlatscanViewer | 1 | 0 | 4 |
| isaac.navigation | FollowPath | 2 | 1 | 4 |
| isaac.navigation | GoTo | 1 | 2 | 10 |
| isaac.navigation | GridSearchLocalizer | 1 | 0 | 6 |
| isaac.navigation | HolonomicBaseWheelImuOdometry | 2 | 1 | 8 |
| isaac.navigation | LocalMap | 1 | 1 | 9 |
| isaac.navigation | LocalizationEvaluation | 0 | 0 | 0 |
| isaac.navigation | LocalizeBehavior | 0 | 0 | 3 |
| isaac.navigation | MapWaypointAsGoal | 1 | 1 | 2 |
| isaac.navigation | MapWaypointAsGoalSimulator | 1 | 0 | 3 |
| isaac.navigation | MoveAndScan | 1 | 1 | 1 |
| isaac.navigation | ObstacleWorld | 0 | 0 | 0 |
| isaac.navigation | OccupancyToBinaryMap | 1 | 1 | 3 |
| isaac.navigation | ParticleFilterLocalization | 1 | 0 | 9 |
| isaac.navigation | ParticleSwarmLocalization | 1 | 0 | 5 |
| isaac.navigation | Patrol | 0 | 1 | 5 |
| isaac.navigation | PoseAsGoal | 0 | 1 | 2 |
| isaac.navigation | PoseHeatmapGenerator | 1 | 1 | 4 |
| isaac.navigation | RandomWalk | 1 | 1 | 5 |
| isaac.navigation | RangeScanModelClassic | 0 | 0 | 5 |
| isaac.navigation | RangeScanModelFlatloc | 0 | 0 | 7 |
| isaac.navigation | RobotRemoteControl | 2 | 1 | 8 |
| isaac.navigation | RobotViewer | 0 | 0 | 1 |
| isaac.navigation | TravellingSalesman | 0 | 1 | 5 |
| isaac.perception | AprilTagsDetection | 1 | 1 | 3 |
| isaac.perception | CropAndDownsample | 1 | 1 | 3 |
| isaac.perception | DepthImageFlattening | 1 | 1 | 12 |
| isaac.perception | DepthImageToPointCloud | 2 | 1 | 1 |
| isaac.perception | DisparityToDepth | 2 | 1 | 0 |
| isaac.perception | FiducialAsGoal | 1 | 2 | 6 |
| isaac.perception | FreespaceFromDepth | 1 | 1 | 15 |
| isaac.perception | ImageUndistortion | 1 | 1 | 2 |
| isaac.perception | RangeScanFlattening | 1 | 1 | 5 |
| isaac.perception | RangeToPointCloud | 1 | 1 | 3 |
| isaac.perception | ScanAccumulator | 1 | 1 | 3 |
| isaac.perception | StereoDisparityNet | 2 | 1 | 3 |
| isaac.perception | StereoImageSplitting | 1 | 2 | 9 |
| isaac.planner | DifferentialBaseControl | 1 | 1 | 9 |
| isaac.planner | DifferentialBaseLqrPlanner | 2 | 1 | 31 |
| isaac.planner | DifferentialBaseModel | 0 | 0 | 3 |
| isaac.planner | DifferentialBaseStop | 0 | 1 | 0 |
| isaac.planner | GlobalPlanner | 1 | 1 | 24 |
| isaac.planner | HolonomicBaseControl | 1 | 1 | 7 |
| isaac.pwm | PwmController | 2 | 0 | 2 |
| isaac.sight | AliceSight | 0 | 0 | 0 |
| isaac.sight | WebsightServer | 0 | 0 | 6 |
| isaac.stereo_depth | CoarseToFineStereoDepth | 2 | 1 | 3 |
| isaac.utils | FlatscanToPointCloud | 1 | 1 | 0 |
| isaac.viewers | ColorCameraViewer | 1 | 0 | 3 |
| isaac.viewers | DepthCameraViewer | 1 | 0 | 7 |
| isaac.viewers | DetectionsViewer | 1 | 0 | 1 |
| isaac.viewers | MosaicViewer | 0 | 0 | 3 |
| isaac.viewers | PointCloudViewer | 1 | 0 | 4 |
| isaac.viewers | SegmentationCameraViewer | 1 | 0 | 3 |
| isaac.viewers | SegmentationViewer | 2 | 0 | 6 |
| isaac.viewers | TensorListViewer | 1 | 0 | 6 |
| isaac.yolo | YoloTensorRTInference | 1 | 1 | 1 |
| navigation | GMappingNode | 2 | 0 | 13 |

# Components¶

## isaac.ArgusCsiCamera¶

Description

Interfaces with the libargus library to support CSI cameras. Only supported on L4T systems like the Jetson Nano.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• image [ColorCameraProto]: Channel to broadcast images extracted from the Argus feed

Parameters

• mode [int32_t] [default=]: Resolution mode of the camera. Supported values are: 0: 2592 x 1944, 1: 2592 x 1458, 2: 1280 x 720
• camera_id [int32_t] [default=]: System device numeral for the camera. For example select 0 for /dev/video0.
• framerate [int32_t] [default=]: Desired frame rate
• focal_length [Vector2d] [default=]: Focal length of the camera in pixels
• optical_center [Vector2d] [default=]: Optical center in pixels

## isaac.AudioCapture¶

Description

Isaac sensor codelet to capture and publish the audio data from a microphone. This reads audio data from an arbitrary number of microphones using the ALSA drivers on the linux distribution. The codelet can be configured to initialize the ALSA driver to capture audio with required sample rate, bit format and number of audio channels.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• audio_capture [AudioDataProto]: Captured audio data packets and their configuration is published.

Parameters

• capture_card_name [string] [default=]: Audio device name as string. Keep empty for default selection.
• sample_rate [int] [default=16000]: Sample rate of the audio data
• num_channels [int] [default=6]: Number of channels present in audio data
• audio_frame_in_milliseconds [int] [default=100]: Time duration of one audio frame
• ticks_per_frame [int] [default=5]: Number of times to query ALSA inside 1 audio frame duration
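
The size of each published audio packet follows from the parameters above. A minimal sketch of the arithmetic (the actual packet layout used by the codelet is not specified in this section; the function name is illustrative):

```python
def samples_per_frame(sample_rate_hz, frame_ms, num_channels):
    """Interleaved sample count for one audio frame of the given duration."""
    per_channel = sample_rate_hz * frame_ms // 1000
    return per_channel * num_channels

# With the defaults above: 16000 Hz, 100 ms frames, 6 channels.
print(samples_per_frame(16000, 100, 6))  # 9600 interleaved samples
```

With `ticks_per_frame=5`, the codelet would query ALSA roughly every 20 ms to fill such a frame.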

## isaac.AudioFileLoader¶

Description

Utility codelet to read raw PCM audio files from the filesystem and publish the contents as audio packets. This codelet can be used to load audio data from the files for playing system sounds or processing pre-recorded audio.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• audio_file_index [AudioFilePlaybackProto]: Index of the file from a pre-defined file list to be loaded.

Outgoing messages

• audio_data_publish [AudioDataProto]: Publish the audio data and its configuration from the requested file

Parameters

• pcm_filelist [std::vector<std::string>] [default=std::vector<std::string>()]: List of raw PCM audio files
• sample_rate [int] [default=16000]: Sample rate of the PCM audio files
• number_of_channels [int] [default=1]: Number of channels in the audio files

## isaac.AudioPlayback¶

Description

Isaac sensor codelet to play the received audio data on a speaker or any chosen playback device using the ALSA drivers in the Linux distribution. The ALSA driver is initialized with the audio configuration from the incoming message. This codelet drops any incoming messages until the previous playback is complete.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• audio_playback_input [AudioDataProto]: Receive the audio data to be played on the playback device.
Outgoing messages
(none)
Parameters
(none)

## isaac.DummyStereoCamera¶

Description

DummyStereoCamera publishes left and right color images and a left depth image with made up data.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• color_left [ColorCameraProto]: Random left color image
• color_right [ColorCameraProto]: Random right color image
• depth [DepthCameraProto]: Random depth image

Parameters

• rows [int] [default=1080]: The number of rows for generated data
• cols [int] [default=1920]: The number of columns for generated data

## isaac.DummyTensorListGenerator¶

Description

DummyTensorListGenerator creates lists of tensors filled with made up data.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• sample [TensorListProto]: Produced random list of tensors with the specified dimensions

Parameters

• num_tensors [int] [default=3]: Number of tensors per sample
• tensor_dim_1 [int] [default=3]: First dimension of the rank 3 tensor
• tensor_dim_2 [int] [default=640]: Second dimension of the rank 3 tensor
• tensor_dim_3 [int] [default=480]: Third dimension of the rank 3 tensor

## isaac.ImageComparison¶

Description

Compares two images and reports the correlation (similarity) between them.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• input_image_a [ColorCameraProto]: First input image
• input_image_b [ColorCameraProto]: Second input image
Outgoing messages
(none)

Parameters

• correlation_threshold [float] [default=0.99]: The minimum correlation between two images where we will consider them the same
• down_scale_factor [int] [default=4]: Scaling of the displayed images in Sight
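
The exact correlation measure used by the codelet is not specified here; a plausible sketch using Pearson correlation over flattened pixel values, with the default threshold (function names are illustrative, not the SDK API):

```python
import math

def correlation(a, b):
    """Pearson correlation coefficient of two equally sized pixel lists."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    std_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    std_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (std_a * std_b)

def images_match(a, b, correlation_threshold=0.99):
    """Consider two images the same if their correlation reaches the threshold."""
    return correlation(a, b) >= correlation_threshold

print(images_match([1, 2, 3, 4], [1, 2, 3, 4]))  # True: identical pixels
```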

## isaac.Joystick¶

Description

Publishes state for a joystick like an Xbox gamepad or other input device.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• js_state [JoystickStateProto]: The joystick message

Parameters

• deadzone [double] [default=0.05]: Size of the “deadzone” region, applied to both positive and negative values for each axis. For example, a deadzone of 0.05 will result in joystick readings in the range [-0.05, 0.05] being clamped to zero. Readings outside of this range are rescaled to fully cover [-1, 1]. In other words, the range [0.05, 1] is linearly mapped to [0, 1], and likewise for negative values.
• num_axes [int] [default=4]: Number of joystick axes (e.g., 4 axes might correspond to two 2-axis analogue sticks)
• num_buttons [int] [default=12]: Number of joystick buttons
• reconnect_interval [double] [default=1.0]: Reconnect interval, in seconds. This is the period between joystick connection attempts (i.e., attempts to open the joystick device file) when the initial attempt fails.
• input_timeout_interval [double] [default=0.1]: Input timeout interval, in seconds. This determines how long tick() will wait for input before giving up until tick() is called again. Note that stop() cannot succeed while tick() is waiting for input, so this timeout value should not be overly long.
• device [string] [default=”/dev/input/js0”]: Joystick device file (system-dependent)
• print_unsupported_buttons_warning [bool] [default=false]: Option controlling whether a warning will be logged when an event is received from an axis or button whose index exceeds num_axes or num_buttons, respectively
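
The deadzone mapping described above (clamp small readings to zero, rescale the rest onto [-1, 1]) can be sketched as follows; this is an illustration of the stated behavior, not the driver's actual code:

```python
def apply_deadzone(value, deadzone=0.05):
    """Clamp readings within the deadzone to zero and linearly rescale
    the remaining range so the output still covers [-1, 1]."""
    if abs(value) <= deadzone:
        return 0.0
    sign = 1.0 if value > 0 else -1.0
    return sign * (abs(value) - deadzone) / (1.0 - deadzone)

print(apply_deadzone(0.03))  # 0.0 (inside the deadzone)
print(apply_deadzone(1.0))   # 1.0 (full deflection is unchanged)
```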

## isaac.KayaBaseDriver¶

Description

Driver for the Kaya robot, which is based on holonomic wheels driven by Dynamixel motors. Every Kaya has two servo motors in front and one in the back. This codelet drives these motors at the desired speeds and transmits their turn rates back as a message.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• command [StateProto]: The holonomic command to be sent to the motors

Outgoing messages

• state [StateProto]: The state of the holonomic base

Parameters

• usb_port [string] [default=”/dev/ttyUSB0”]: USB port where Dynamixel controller is located at. usb_port varies depending on the controller device, e.g., “/dev/ttyACM0” or “/dev/ttyUSB0”
• baudrate [int] [default=1000000]: Baud rate of the Dynamixel bus. This is the rate of information transfer.
• servo_front_left [int] [default=1]: Unique identifier for Dynamixel servo at front left. Each motor needs to be assigned a unique ID using the software provided by Dynamixel.
• servo_front_right [int] [default=3]: Unique identifier for Dynamixel servo at front right. Each motor needs to be assigned a unique ID using the software provided by Dynamixel.
• servo_back [int] [default=2]: Unique identifier for Dynamixel servo at back. Each motor needs to be assigned a unique ID using the software provided by Dynamixel.
• torque_limit [double] [default=kDefaultTorqueLimit]: Servo maximum torque limit. Caps the amount of torque the servo will apply. 0.0 is no torque, 1.0 is max available torque
• wheel_base_length [double] [default=0.125]: Distance of the wheel from robot center of mass. This value is used in kinematic computations.
• wheel_radius [double] [default=0.03]: Wheel radius. This value is used in kinematic computations.
• max_safe_speed [double] [default=0.3]: Max safe speed
• max_angular_speed [double] [default=0.3]: Max turning rate
• debug_mode [bool] [default=false]: If debug_mode is true, debug_servos will rotate with debug_speed.
• debug_speed [double] [default=100]: If debug_mode is true, debug_servos will rotate with debug_speed.
• debug_servos [std::vector<int>] [default=std::vector<int>({1, 2, 3})]: If debug_mode is true, debug_servos will rotate with debug_speed.
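
The kinematic computations mentioned for `wheel_base_length` and `wheel_radius` can be sketched with standard three-wheel omni-drive inverse kinematics. Note that the wheel mounting angles below are illustrative assumptions (they are not given in this section), as is the function name:

```python
import math

# Hypothetical mounting angles (radians) for the three omni wheels:
# two in front, one in the back. The real angles are an assumption here.
WHEEL_ANGLES = [math.radians(60), math.radians(180), math.radians(300)]

def wheel_speeds(vx, vy, omega, wheel_base_length=0.125, wheel_radius=0.03):
    """Inverse kinematics: map a body twist (vx, vy, omega) to the
    angular speed (rad/s) of each wheel."""
    return [
        (-math.sin(a) * vx + math.cos(a) * vy + wheel_base_length * omega)
        / wheel_radius
        for a in WHEEL_ANGLES
    ]

# Pure rotation in place: all three wheels turn at the same rate.
print(wheel_speeds(0.0, 0.0, 1.0))
```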

## isaac.PanTiltDriver¶

Description

The PanTiltDriver class is a driver for a pan/tilt unit based on Dynamixel motors.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• command [StateProto]: Current command for pan/tilt unit

Outgoing messages

• state [StateProto]: The state of the pan tilt unit
• motors [DynamixelMotorsProto]: State of Dynamixel motors

Parameters

• use_speed_control [bool] [default=true]: If set to true dynamixels are controlled in speed mode, otherwise they are controlled in position mode
• usb_port [string] [default=”/dev/ttyUSB0”]: USB port used to connect to the bus (A2D2 USB adapter)
• pan_servo_id [int] [default=1]: Dynamixel ID for pan servo
• tilt_servo_id [int] [default=2]: Dynamixel ID for tilt servo
• pan_target [double] [default=3.30]: Dynamixel target joint angle for pan motion
• tilt_target [double] [default=4.83]: Dynamixel target joint angle for tilt motion
• baudrate [int] [default=1000000]: Baudrate of the Dynamixel bus

## isaac.RealsenseCamera¶

Description

Isaac codelet for the Realsense D435 camera, which comes with two sensors (color and depth). Please note that the color camera supports more image formats (RGB8, Y16, BGRA8, RGBA8, BGR8, YUYV); this codelet only supports RGB8 and Y16. Support for other formats can be added as necessary. These resolutions and settings have been tested on firmware version 05.10.03. To update or downgrade the firmware version on the sensor, follow the steps from this page: https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/Linux-RealSense-D400-DFU-Guide.pdf

Color sensor supported modes:

• 1: 1920x1080 @ 30Hz, 15Hz, 6Hz
• 2: 1280x720 @ 30Hz, 15Hz, 6Hz
• 3: 960x540 @ 60Hz, 30Hz, 15Hz, 6Hz
• 4: 848x480 @ 60Hz, 30Hz, 15Hz, 6Hz
• 6: 640x480 @ 60Hz, 30Hz, 15Hz, 6Hz
• 7: 640x360 @ 60Hz, 30Hz, 15Hz, 6Hz
• 8: 424x240 @ 60Hz, 30Hz, 15Hz, 6Hz
• 9: 320x240 @ 60Hz, 30Hz, 15Hz, 6Hz
• 10: 320x180 @ 60Hz, 30Hz, 15Hz, 6Hz

IR (left and right) stream supported modes:

• 1: Infrared 1280x800 @ 30Hz, 15Hz (not supported when the depth stream is active)
• 2: Infrared 1280x720 @ 30Hz, 15Hz, 6Hz
• 3: Infrared 848x480 @ 90Hz, 60Hz, 30Hz, 15Hz, 6Hz
• 4: Infrared 640x480 @ 90Hz, 60Hz, 30Hz, 15Hz, 6Hz
• 4: Infrared 640x360 @ 90Hz, 60Hz, 30Hz, 15Hz, 6Hz
• 5: Infrared 480x270 @ 90Hz, 60Hz, 30Hz, 15Hz, 6Hz
• 6: Infrared 424x240 @ 90Hz, 60Hz, 30Hz, 15Hz, 6Hz

Depth stream supported modes:

• 1: Depth 1280x720 @ 30Hz, 15Hz, 6Hz
• 2: Depth 848x480 @ 90Hz, 60Hz, 30Hz, 15Hz, 6Hz
• 3: Depth 640x480 @ 90Hz, 60Hz, 30Hz, 15Hz, 6Hz
• 4: Depth 640x360 @ 90Hz, 60Hz, 30Hz, 15Hz, 6Hz
• 5: Depth 480x270 @ 90Hz, 60Hz, 30Hz, 15Hz, 6Hz
• 6: Depth 424x240 @ 90Hz, 60Hz, 30Hz, 15Hz, 6Hz

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• color [ColorCameraProto]: Color camera image. This can be Image3ub (for color) or Image1ui16 (for grayscale)
• ir_left [ColorCameraProto]: Left Ir camera image
• ir_right [ColorCameraProto]: Right Ir camera image
• depth [DepthCameraProto]: Depth image (in meters). This is in left Ir camera frame.

Parameters

• index [int] [default=0]: Index of the camera, when working with multiple realsense cameras. Each realsense camera consists of an RGB sensor and two IR sensors, each of which get enumerated as /dev/video*. So we cannot use /dev/video* (like ZedCamera and V4L2Camera) to uniquely identify a d435 device. The index here identifies the index of the camera in a list that is sorted by serial number of the camera.
• color_mode [ColorMode] [default=kRgb]: Color camera mode. Cannot be changed after the device starts
• color_size [Vector2i] [default=Vector2i(1080, 1920)]: Color image size. Cannot be changed after the device starts
• color_fps [int] [default=30]: Color camera frame rate. Cannot be changed after the device starts
• enable_depth [bool] [default=true]: Enable/disable depth stream. Cannot be changed after the device starts
• enable_ir_left [bool] [default=false]: Enable/disable left IR stream. Cannot be changed after the device starts
• enable_ir_right [bool] [default=false]: Enable/disable right IR stream. Cannot be changed after the device starts
• depth_size [Vector2i] [default=Vector2i(720, 1280)]: Depth/IR image size . Cannot be changed after the device starts
• depth_fps [int] [default=30]: Depth/IR frame rate. Cannot be changed after the device starts
• color_backlight_compensation [int] [default=0]: Enable/disable color backlight compensation. Range [0, 1].
• color_brightness [int] [default=0]: Color image brightness. Range [-64, 64].
• color_contrast [int] [default=50]: Color image contrast. Range [0, 100].
• color_enable_auto_exposure [int] [default=1]: Enable/disable color image auto-exposure. Range [0, 1].
• color_exposure [int] [default=166]: Controls the exposure time of the color camera. Setting any value will disable auto exposure. Range [41, 10000].
• color_gain [int] [default=64]: Color image gain. Range [0, 128].
• color_gamma [int] [default=300]: Color image gamma setting. Range [100, 500].
• color_hue [int] [default=0]: Color image hue. Range [-180, 180].
• color_saturation [int] [default=64]: Color image saturation. Range [0, 100].
• color_sharpness [int] [default=50]: Color image sharpness. Range [0, 100].
• color_white_balance [int] [default=4600]: Controls the white balance of the color image. Setting any value will disable auto white balance. Range [2800, 6500]. Step size 10.
• color_enable_auto_white_balance [int] [default=1]: Enable/disable auto white balance. Range [0, 1].
• color_frames_queue_size [int] [default=2]: Max number of frames you can hold at a given time. Increasing this number will reduce frame drops but increase latency, and vice versa. Range [0, 32].
• color_power_line_frequency [int] [default=3]: Power line frequency control for anti-flickering: Off/50Hz/60Hz/Auto. Range [0, 2].
• color_auto_exposure_priority [int] [default=0]: Allows the sensor to dynamically adjust the frame rate depending on lighting conditions. Range [0, 1].
• depth_exposure [int] [default=8500]: Depth sensor exposure time in microseconds. Setting any value will disable auto exposure. Range [0, 166000]. Step size 20.
• depth_enable_auto_exposure [int] [default=1]: Enable/disable depth image auto-exposure. Range [0, 1].
• depth_gain [int] [default=16]: Depth image gain. Range [16, 248].
• depth_visual_preset [int] [default=0]: Provides access to several recommended sets of option presets for the depth camera. Range [0, 6].
• depth_laser_power [int] [default=150]: Manual laser power in mW. Applicable only when the laser power mode is set to Manual. Range [0, 360]. Step size 30.
• depth_emitter_enabled [bool] [default=false]: Power control for the D400 projector: 0-off, 1-on, (2-deprecated). Range [0, 1].
• depth_frames_queue_size [int] [default=2]: Max number of frames you can hold at a given time. Increasing this number will reduce frame drops but increase latency, and vice versa. Range [0, 32].
• depth_error_polling_enabled [int] [default=1]: Enable/disable polling of camera internal errors. Range [0, 1].
• depth_output_trigger_enabled [int] [default=0]: Generate a trigger from the camera to an external device once per frame. Range [0, 1].
• depth_inter_cam_sync_mode [int] [default=1]: Inter-camera synchronization mode: 0:Default, 1:Master, 2:Slave. Range [0, 2].

## isaac.RealsenseCameraSimple¶

Description

RealsenseCameraSimple is an Isaac codelet for the Realsense D435 camera that provides color and depth images. The sensor can also provide raw IR images, however this is currently not supported.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• color [ColorCameraProto]: A color camera image that can be Image3ub (for color) or Image1ui16 (for grayscale)
• depth [DepthCameraProto]: Depth image (in meters). This is in the left IR camera frame

Parameters

• rows [int] [default=360]: Number of pixels in the height dimension. Together with cols this sets the resolution of captured images. Valid choices are: 1280x720 (at most 30 Hz), 848x480, 640x480, 640x360, 480x270, 424x240. The camera can also produce images at 1920x1080, however this is currently not supported as color and depth are set to the same resolution.
• cols [int] [default=640]: Number of pixels in the width dimension
• framerate [int] [default=30]: The frame rate of color/depth image acquisition. Valid choices are: 60, 30, 15, 6. Note that the depth camera supports higher frame rates, however this is currently not supported as color and depth are set to the same resolution.
• align_to_color [bool] [default=true]: If enabled, the depth image is aligned to the color image to provide matching color and depth values for every pixel.
• frame_queue_size [int] [default=2]: Max number of frames you can hold at a given time. Increasing this number reduces frame drops but increases latency, and vice versa; ranges from 0 to 32.
• auto_exposure_priority [bool] [default=false]: Limit exposure time when auto-exposure is ON to preserve constant fps rate.

## isaac.SegwayRmpDriver¶

Description

A driver for the Segway RMP base.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• segway_cmd [StateProto]: Linear and angular speed command for driving segway (navigation::DifferentialBaseControl type)

Outgoing messages

• segway_state [DifferentialBaseStateProto]: State of the segway consisting of linear and angular speeds and accelerations

Parameters

• ip [string] [default=”192.168.0.40”]: Isaac will use this IP to talk to segway
• port [int] [default=8080]: Isaac will use this port to talk to segway
• flip_orientation [bool] [default=true]: If true, segway’s forward direction will be flipped
• speed_limit_linear [double] [default=1.1]: Maximum linear speed segway is allowed to travel with
• speed_limit_angular [double] [default=1.0]: Maximum angular speed segway is allowed to rotate with
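
The two speed-limit parameters cap the commands forwarded to the base. A minimal sketch of that limiting behavior, assuming simple clamping (the driver's actual limiting strategy is not specified here, and the function name is illustrative):

```python
def clamp_command(linear, angular, speed_limit_linear=1.1, speed_limit_angular=1.0):
    """Clamp a (linear, angular) drive command to the configured limits."""
    def clamp(value, limit):
        return max(-limit, min(limit, value))
    return clamp(linear, speed_limit_linear), clamp(angular, speed_limit_angular)

print(clamp_command(2.0, -3.0))  # (1.1, -1.0): both components limited
```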

## isaac.SerialBMI160¶

Description

SerialBMI160 is a driver that uses a serial connection to talk to the BMI160 Inertial Measurement Unit (IMU).

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• imu [ImuProto]: IMU data including linear accelerations and angular velocities

Parameters

• device [string] [default=”/dev/ttyUSB0”]: Device path for the IMU device

## isaac.SlackBot¶

Description

A SlackBot to perform authentication and listen for incoming commands

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• slack_message [ChatMessageProto]: Messages to be sent to the slack server

Outgoing messages

• user_instruction [ChatMessageProto]: Messages received from the slack server

Parameters

• bot_token [string] [default=]: Slack bot token given on the slack app config page
• slack_connect_url [string] [default=”https://slack.com/api/rtm.connect”]: Slack URL to which the connection request will be sent

## isaac.StereoVisualOdometry¶

Description

This is a stereo visual odometry codelet based on an implementation from NVIDIA. The input to the codelet is a left and right grayscale image pair with known intrinsics and extrinsics (the relative transformation between the cameras). The output of the codelet is a 6DOF pose of the left camera. The coordinate frame for this 6DOF pose is X front, Y left, and Z up, where front means the direction of the optical axis of the camera.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• left [ColorCameraProto]: Gray input left image. Images should be rectified and undistorted prior to being passed in here.
• right [ColorCameraProto]: Gray input right image
• extrinsics [Pose3dProto]: Camera pair extrinsics
Outgoing messages
(none)

Parameters

• num_points [int] [default=100]: number of points to include in the pose trail debug visualization

## isaac.V4L2Camera¶

Description

V4L2Camera is a camera driver implemented using V4L2. Currently this driver only accepts images from cameras in the YUYV format and automatically converts them to RGB.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• frame [ColorCameraProto]: Each frame output by the camera

Parameters

• device_id [int32_t] [default=0]: Which camera should be opened
• rows [int32_t] [default=720]: Number of pixels in the height dimension. The image parameters requested from the camera must match exactly what the camera is able to produce.
• cols [int32_t] [default=1280]: Number of pixels in the width dimension
• rate_hz [int32_t] [default=30]: Frames per second.
• hardware_image_queue_length [int32_t] [default=3]: Buffers are queued with the V4L2 driver so that the driver can write out images at the specified frame rate without delays. This may be changed by the camera when we are initializing.
• focal_length [Vector2d] [default=(Vector2d{700.0, 700.0})]: Focal length (in pixels) for the pinhole camera model
• optical_center [Vector2d] [default=(Vector2d{360.0, 640.0})]: Optical center of the projection for the pinhole camera model
• brightness [int32_t] [default=128]: Adjustable camera parameter. v4l2-ctl can be used to check values, e.g., “v4l2-ctl --device=/dev/video0 --list-ctrls”. Descriptions below are taken from the video4linux API documentation. Picture brightness, or more precisely, the black level. Needs to be between 0 and 255
• contrast [int32_t] [default=128]: Picture contrast or luma gain. Needs to be between 0 and 255
• saturation [int32_t] [default=128]: Picture color saturation or chroma gain. Needs to be between 0 and 255
• gain [int32_t] [default=0]: Gain control. Needs to be between 0 and 255
• white_balance_temperature_auto [bool] [default=true]: If true, the white balance temperature will be automatically adjusted.
• white_balance_temperature [int32_t] [default=4000]: This control specifies the white balance settings as a color temperature in Kelvin. The white balance temperature needs to be between 2000 and 6500. This parameter is inactive if white_balance_temperature_auto is true
• exposure_auto [int32_t] [default=3]: Exposure time and/or iris aperture. 0: Automatic exposure time, automatic iris aperture. 1: Manual exposure time, manual iris. 2: Manual exposure time, auto iris. 3: Auto exposure time, manual iris.
• exposure_absolute [int32_t] [default=250]: Determines the exposure time of the camera sensor. The exposure time is limited by the frame interval. Drivers should interpret the values as 100 µs units, where the value 1 stands for 1/10000th of a second, 10000 for 1 second and 100000 for 10 seconds. Valid values are between 3 and 2047.
• use_cuda_color_conversion [bool] [default=true]: Whether to convert from YUYV to RGB using CUDA; otherwise the CPU is used for the conversion.
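
The 100-microsecond unit convention for exposure_absolute can be made concrete with a one-line conversion (the helper name is illustrative):

```python
def exposure_seconds(exposure_absolute):
    """Convert a V4L2 exposure_absolute value (in 100 microsecond units,
    per the parameter description above) to seconds."""
    return exposure_absolute * 100e-6

print(exposure_seconds(10000))  # 1.0 second, matching the description
print(exposure_seconds(250))    # 0.025 s for the default value above
```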

## isaac.VelodyneDriver¶

Description

A driver for the Velodyne VLP16 Lidar.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• scan [RangeScanProto]: A range scan slice published by the Lidar

Parameters

• ip [string] [default=”192.168.2.201”]: The IP address of the Lidar device
• port [int] [default=2368]: The port at which the Lidar device publishes data.
• type [drivers::VelodyneModelType] [default=drivers::VelodyneModelType::VLP16]: The type of the Lidar (currently only VLP16 is supported).

## isaac.Vicon¶

Description

This codelet publishes motion capture information from a Vicon Datastream. Use of this codelet requires a Vicon Datastream connected to camera equipment. It allows tracking of marker information and rigid body information relative to a world frame that can be user-defined during setup of the Vicon hardware.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• vicon_pose_tree [PoseTreeProto]: Pose tree message containing information from Vicon scene volume
• vicon_markers [MarkerListProto]: Marker list message containing all markers visible in Vicon scene volume

Parameters

• vicon_hostname [string] [default=”localhost”]: Hostname of the Vicon system
• vicon_port [string] [default=”801”]: Port to which the Vicon data is streaming
• reconnect_interval [double] [default=1.0]: Amount of time to wait before attempting to reconnect to the Vicon system

## isaac.ZedCamera¶

Description

Provides stereo image pairs and calibration information from a ZED camera

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• left_camera_rgb [ColorCameraProto]: left rgb image and camera intrinsics
• right_camera_rgb [ColorCameraProto]: right rgb image and camera intrinsics
• left_camera_gray [ColorCameraProto]: left gray image and camera intrinsics
• right_camera_gray [ColorCameraProto]: right gray image and camera intrinsics
• extrinsics [Pose3dProto]: camera pair extrinsics (right-to-left)

Parameters

• resolution [sl::RESOLUTION] [default=sl::RESOLUTION_VGA]: The resolution to use for the ZED camera. The following values can be set: RESOLUTION_HD2K: 2208x1242, RESOLUTION_HD1080: 1920x1080, RESOLUTION_HD720: 1280x720, RESOLUTION_VGA: 672x376
• device_id [int] [default=0]: The numeral of the system video device of the ZED camera. For example for /dev/video0 choose 0.
• gray_scale [bool] [default=false]: Turns on gray scale images
• rgb [bool] [default=true]: Turns on RGB color images
• settings_folder_path [string] [default=”./”]: The folder path to the settings file (SN#####.conf) for the ZED camera. This file contains the calibration parameters for the camera.
• gpu_id [int] [default=0]: The GPU device to be used for ZED CUDA operations

## isaac.alice.Config¶

Description

Stores node configuration as key-value pairs. This component is added to every node by default and does not have to be added manually. The config component is used by other components and by the node itself to store structure and state. Most notably, configuration values can be accessed directly in codelets. Support for basic types and some math types is built in. Configuration is stored in a group-key-value format: each component, and the node itself, defines a separate group of key-value pairs. Additional custom groups can be added by the user.

Type: Component - This component does not tick and only provides certain helper functions.

Incoming messages
(none)
Outgoing messages
(none)
Parameters
(none)
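The group-key-value format described above can be sketched as a JSON app-config fragment. This is illustrative only: the node name my_node and the custom_group entry are hypothetical, while the parameter names under isaac.alice.Throttle are taken from that component's documentation in this section.

```json
{
  "config": {
    "my_node": {
      "isaac.alice.Throttle": {
        "data_channel": "input",
        "output_channel": "output",
        "minimum_interval": 0.1
      },
      "custom_group": {
        "answer": 42
      }
    }
  }
}
```

Here "my_node" is the group for the node, "isaac.alice.Throttle" is the group for that component, and "custom_group" is a user-defined group.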

## isaac.alice.Failsafe¶

Description

A soft failsafe switch which can be used to check if a certain component of the system is still reactive. The failsafe is kept alive by a FailsafeHeartbeat component. Failsafe and FailsafeHeartbeat components can be in different nodes. They are identified via the name parameter.

Type: Component - This component does not tick and only provides certain helper functions.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• name [string] [default=]: the name of the failsafe

## isaac.alice.FailsafeHeartbeat¶

Description

A soft heartbeat which can be used to keep a failsafe alive. If the heartbeat is not activated in time the corresponding failsafe will fail.

Type: Component - This component does not tick and only provides certain helper functions.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• interval [double] [default=]: The expected heart beat interval (in seconds). This is the time duration for which the heartbeat will stay activated after a single activation. The heartbeat needs to be activated again within this time interval, otherwise the corresponding Failsafe will fail.
• failsafe_name [string] [default=]: The name of the failsafe to which this heartbeat is linked. This must be the same as the name parameter in the corresponding Failsafe component.
• heartbeat_name [string] [default=]: The name of this heartbeat. This is purely for informative purposes.
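The interaction between a Failsafe and its FailsafeHeartbeat can be modeled in a few lines of Python. This is an illustrative sketch, not the Isaac API; the class and method names are invented.

```python
class FailsafeSketch:
    """Illustrative model: a heartbeat keeps a failsafe alive for
    `interval` seconds after each activation."""

    def __init__(self, interval):
        self.interval = interval
        self.expires_at = None

    def beat(self, now):
        # Activating the heartbeat extends the deadline by `interval`.
        self.expires_at = now + self.interval

    def is_alive(self, now):
        # The failsafe fails once the heartbeat has not been renewed in time.
        return self.expires_at is not None and now <= self.expires_at

fs = FailsafeSketch(interval=0.5)
fs.beat(now=0.0)
print(fs.is_alive(now=0.4))  # True: within the interval
print(fs.is_alive(now=0.6))  # False: heartbeat was not renewed in time
```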

## isaac.alice.MessageLedger¶

Description

Stores time histories of messages for various channels of this node and distributes messages between various systems. Every node which engages in message passing must have a MessageLedger component.

Type: Component - This component does not tick and only provides certain helper functions.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• history [int] [default=10]: The maximum number of messages to hold in the history

## isaac.alice.Pose¶

Description

Provides convenience functions to access 3D transformations from the application-wide pose tree. This component is added to every node by default and does not have to be added manually. Poses are 3-dimensional and use 64-bit floating point types. All coordinate frames for the whole application are stored in a single central pose tree. All functions below accept two coordinate frames, lhs and rhs, which refer to the pose lhs_T_rhs: the relative transformation between these two coordinate frames. In particular the following identities hold: p_lhs = lhs_T_rhs * p_rhs, and a_T_c = a_T_b * b_T_c. Not all coordinate frames are connected. If that is the case, or if either of the two coordinate frames does not exist, the pose is said to be “invalid”.

Type: Component - This component does not tick and only provides certain helper functions.

Incoming messages
(none)
Outgoing messages
(none)
Parameters
(none)
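The two identities in the description can be checked with a short Python sketch using homogeneous 4x4 matrices. This is illustrative only and does not use the Isaac SDK pose API.

```python
import numpy as np

def transform(yaw_rad, translation):
    """Homogeneous 4x4 transform: a rotation about Z plus a translation."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    t = np.eye(4)
    t[:2, :2] = [[c, -s], [s, c]]
    t[:3, 3] = translation
    return t

# a_T_b: pose of frame b relative to frame a; b_T_c: pose of c relative to b.
a_T_b = transform(np.pi / 2, [1.0, 0.0, 0.0])
b_T_c = transform(0.0, [2.0, 0.0, 0.0])

# Chaining rule from the description: a_T_c = a_T_b * b_T_c
a_T_c = a_T_b @ b_T_c

# Mapping rule: p_lhs = lhs_T_rhs * p_rhs (here: the origin of frame c in frame a)
p_c = np.array([0.0, 0.0, 0.0, 1.0])
p_a = a_T_c @ p_c
print(p_a[:3])  # the origin of frame c expressed in frame a: (1, 2, 0)
```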

## isaac.alice.PoseInitializer¶

Description

A codelet which creates a 3D transformation in the pose tree between two reference frames. This can for example be used to set transformations which never change or to set initial values for transformations.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• lhs_frame [string] [default=]: Name of the reference frame of the left side of the pose
• rhs_frame [string] [default=]: Name of the reference frame of the right side of the pose
• pose [Pose3d] [default=]: Transformation lhs_T_rhs
• attach_interactive_marker [bool] [default=false]: If enabled the pose is editable via an interactive marker.
• add_yaw_degrees [double] [default=0.0]: Additional yaw angle around the Z axis in degrees. Currently only enabled if attach_interactive_marker is false.
• add_pitch_degrees [double] [default=0.0]: Additional pitch angle around the Y axis in degrees. Currently only enabled if attach_interactive_marker is false.
• add_roll_degrees [double] [default=0.0]: Additional roll angle around the X axis in degrees. Currently only enabled if attach_interactive_marker is false.

## isaac.alice.PyCodelet¶

Description

PyCodelet is a C++ codelet instance for Python codelets; it synchronizes with a Python codelet to mimic the effect of embedding Python scripts in a C++ codelet.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• config [json] [default=nlohmann::json({})]: Parameter for getting Isaac parameters to pyCodelets. For details, see PybindPyCodelet.

## isaac.alice.Recorder¶

Description

Stores data in a log file. This component can, for example, be used to write incoming messages to a log file. The messages can then be replayed using the Replay component. In order to record a message channel, set up an edge from the publishing component to the Recorder component. The source channel is the name of the channel under which the publishing component publishes the data. The target channel name on the Recorder component can be chosen freely. When data is replayed it will be published by the Replay component under that same channel name. Warning: Please note that the log container format is not yet final and that breaking changes might occur in the future. The root directory used to log data is base_directory/exec_uuid/tag/… where both base_directory and tag are configuration parameters. exec_uuid is a UUID which changes for every execution of an app and is unique over all possible executions. If tag is the empty string, the root log directory is just base_directory/exec_uuid/…. Multiple recorders can write to the same root log directory, in which case they share the same key-value database. However, only one recorder is allowed per log series: if the same component/key channel is logged by two different recorders, they can not write to the same log directory.

Type: Component - This component does not tick and only provides certain helper functions.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• base_directory [string] [default=”/tmp/isaac”]: The base directory used as part of the log directory (see class comment)
• tag [string] [default=”“]: A tag used as part of the log directory (see class comment)
• enabled [bool] [default=true]: Can be used to disable logging.
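The root log directory layout described above can be sketched in Python. The helper name log_root is invented for illustration; only the base_directory/exec_uuid/tag layout comes from the description.

```python
import uuid
from pathlib import Path

def log_root(base_directory, exec_uuid, tag):
    """Root log directory as described: base_directory/exec_uuid/tag,
    collapsing to base_directory/exec_uuid when tag is the empty string."""
    root = Path(base_directory) / exec_uuid
    return root / tag if tag else root

exec_uuid = str(uuid.uuid4())  # a new UUID for every execution of the app
print(log_root("/tmp/isaac", exec_uuid, "run1"))
print(log_root("/tmp/isaac", exec_uuid, ""))
```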

## isaac.alice.Replay¶

Description

Replays data from a log file which was recorded by a Recorder component. See the documentation for the Recorder component for more information.

Type: Component - This component does not tick and only provides certain helper functions.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• cask_directory [string] [default=”“]: The cask directory used to replay data from
• replay_time_offset [int64_t] [default=0]: Time offset within the log at which to start the replay
• use_recorded_message_publish_time [bool] [default=false]: Decides whether to use recorded message pubtime or replay current time as pubtime
• loop [bool] [default=false]: If this is enabled, replay will start again from the beginning once the end of the log is reached

## isaac.alice.ReplayBridge¶

Description

Communication Bridge between WebsightServer and Replay Node

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• request [nlohmann::json]: Request to replay node

Outgoing messages

• reply [nlohmann::json]: Reply from replay node

Parameters

• replay_component_name [string] [default=]: Replay component name in format node/component. Ex: replay/isaac.alice.Replay

## isaac.alice.Scheduling¶

Description

This component contains scheduling information for codelets. Parameters apply to all components in a node. If the component is not present, default parameters are used.

Type: Component - This component does not tick and only provides certain helper functions.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• priority [int] [default=0]: Controls the relative priority of a codelet task within a timeslice window. Used for periodic and event-driven codelets. Higher values have higher priority.
• slack [double] [default=0]: Controls how much variation in start time is allowed when executing a codelet. Used for periodic and event-driven codelets. The parameter unit is seconds.
• deadline [double] [default=]: Sets the expected time that the codelet will take to complete processing. If no value is specified, periodic tasks assume the period of the task and other tasks assume there is no deadline. The parameter unit is seconds.
• execution_group [string] [default=”“]: Sets the execution group for the codelet. Users can define groups in the scheduler configuration. If an execution_group is specified it overrides default behaviors; if no value is specified the default configuration is used. The default configuration creates the following groups: BlockingGroup, in which blocking threads run according to OS scheduling (the default for tickBlocking); and WorkerGroup, in which one worker thread per core executes tick functions for tickPeriodic/OnEvent. Note: tickBlocking spawns a worker thread for the blocking task which, if executed in the WorkerGroup, can interfere with worker thread execution due to OS scheduling. Removing the default groups can lead to instabilities if one is not careful.

## isaac.alice.Sight¶

Description

This component is a proxy to access and expose sight functionalities to components. This component is added to every node by default. It should not be added to a node manually.

Type: Component - This component does not tick and only provides certain helper functions.

Incoming messages
(none)
Outgoing messages
(none)
Parameters
(none)

## isaac.alice.TcpPublisher¶

Description

Sends messages via a TCP network socket. This component waits for clients to connect and forwards all messages sent to it to the connected clients.

Type: Component - This component does not tick and only provides certain helper functions.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• port [int] [default=]: The TCP port number used to wait for connections and to publish messages.

## isaac.alice.TcpSubscriber¶

Description

Receives messages from a TCP network socket. This component connects to a socket and publishes all messages it receives on the socket.

Type: Component - This component does not tick and only provides certain helper functions.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• host [string] [default=]: The IP address of the remote host from which messages will be received.
• port [int] [default=]: The TCP port number on which the remote host is publishing messages.
• reconnect_interval [double] [default=0.5]: If a connection to the remote host can not be established or breaks, we try to re-establish the connection at this interval (in seconds).
• update_pubtime [bool] [default=true]: If set to true publish timestamp will be set when the message is received; otherwise the original publish timestamp issued by the remote will be used.

## isaac.alice.Throttle¶

Description

Throttles messages on a data channel. If use_signal_channel is enabled a signal channel is used as a heartbeat. Messages on the data channel will only be published whenever a message on the signal channel was received. In any case minimum_interval is used to additionally throttle the output.

Type: Component - This component does not tick and only provides certain helper functions.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• data_channel [string] [default=]: The name of the data channel to be throttled
• output_channel [string] [default=]: The name of the output data channel with throttled data
• minimum_interval [double] [default=0.0]: The minimum time period after which a message can be published again on the data channel.
• use_signal_channel [bool] [default=true]: If enabled the signal channel will define which incoming messages are passed on. This enables the parameters signal_channel and acqtime_tolerance.
• signal_channel [string] [default=]: The name of the signal channel used for throttling
• acqtime_tolerance [int] [default=]: The tolerance on the acqtime to match data and signal channels. If this parameter is not specified the latest available message on the data channel will be taken.
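The minimum_interval behavior can be sketched in a few lines of Python (signal-channel mode is not shown). This is an illustrative sketch of the described behavior, not the Isaac implementation; the function name throttle is invented.

```python
def throttle(messages, minimum_interval):
    """Pass a (timestamp, payload) message only if at least
    `minimum_interval` seconds elapsed since the last published message."""
    last_pub = None
    out = []
    for stamp, payload in messages:
        if last_pub is None or stamp - last_pub >= minimum_interval:
            out.append((stamp, payload))
            last_pub = stamp
    return out

msgs = [(0.0, "a"), (0.05, "b"), (0.12, "c"), (0.19, "d"), (0.25, "e")]
print(throttle(msgs, minimum_interval=0.1))  # "b" and "d" arrive too soon
```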

## isaac.audio.AudioEnergyCalculation¶

Description

Feature codelet to compute average energy per audio packet. The energy is averaged over the configured list of channels for each audio packet. This energy is measured in decibels (dB).

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• audio_packets [AudioDataProto]: Receive the multi-channeled audio packets for computing the energy.

Outgoing messages

• audio_energy [StateProto]: The average energy in dB per audio packet is published.

Parameters

• channel_indices [std::vector<int>] [default=]: Indices of the audio channels which are used for calculating the audio energy
• reference_energy [double] [default=0]: Reference energy in decibels (dB). The energy of the audio packet is computed w.r.t. this reference energy. This is usually the Acoustic Overload Point or maximum dB value mentioned in the specification sheet of the microphone.
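The exact energy formula is not documented here; a common definition consistent with the description (mean-square signal power in dB, offset by the reference energy) can be sketched as follows. The function name and formula are assumptions for illustration.

```python
import math

def packet_energy_db(samples, reference_energy_db=0.0):
    """Average energy of one audio packet in dB, relative to a reference
    level. The formula is a guess; the SDK does not document it here."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10.0 * math.log10(mean_square) + reference_energy_db

# A full-scale sine wave has mean-square power 0.5, i.e. about -3.01 dB
# relative to a 0 dB reference.
sine = [math.sin(2 * math.pi * k / 64) for k in range(64)]
print(round(packet_energy_db(sine), 2))  # -3.01
```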

## isaac.audio.SoundSourceLocalization¶

Description

Feature codelet to compute the direction of the dominant sound source from the incoming audio data packets. The direction is measured as an angle in radians from the reference axis. Currently only circular microphone arrays with at least 4 microphones are supported.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• audio_packets [AudioDataProto]: Receive the multi-channeled audio packets for computing the direction.

Outgoing messages

• audio_angle [StateProto]: Azimuth angle of the dominant sound source with respect to the reference axis (measured anti-clockwise) is published.

Parameters

• audio_duration [float] [default=0.5f]: Duration (in seconds) of the audio data used for computation of the azimuth angle. The milliseconds equivalent of this value should be an integral multiple of the input audio duration in milliseconds.
• microphone_distance [float] [default=0.0f]: Distance between two diagonally opposite microphones on the microphone array.
• microphone_pairs [std::vector<Vector2i>] [default=]: Pairs of indices of the audio channels corresponding to microphone elements.
• reference_offset_angle [int] [default=0]: Angle of the first diagonally opposite microphone pair with respect to the reference axis.

## isaac.audio.VoiceCommandConstruction¶

Description

Feature codelet to detect commands from a series of keyword probabilities (received as 2D tensors). This codelet analyzes whether the detected keywords form any of the defined commands. If a command is identified, the command id along with a series of timestamps of the audio packets that contributed to the command are published.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• keyword_probabilities [TensorListProto]: Receive keyword probabilities (generally produced by tensorflow inference) as 2D tensors. Only tensors with first dimension as 1 are accepted.

Outgoing messages

• detected_command [VoiceCommandDetectionProto]: Publish the detected command id and list of timestamps of the contributing keywords.

Parameters

• command_list [std::vector<std::string>] [default=]: User defined command list
• command_ids [std::vector<int>] [default=]: User defined command ids
• max_frames_allowed_after_keyword_detected [int] [default=]: Maximum number of frames to look for a defined command after the trigger keyword is detected
• num_classes [int] [default=]: Number of classes (model-specific parameter present in the metadata)
• classes [std::vector<std::string>] [default=]: List of classes in same order as that present in model output
• thresholds [std::vector<float>] [default=]: Probability thresholds per class

## isaac.audio.VoiceCommandFeatureExtraction¶

Description

Feature codelet to extract the MFCC and Delta features of the incoming audio packets. These features are used by the ListeNet architecture for Voice Command Detection.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• audio_packets [AudioDataProto]: Receive audio packets to extract features

Outgoing messages

• feature_tensors [TensorListProto]: Tensors of Extracted features

Parameters

• audio_channel_index [int] [default=0]: Index of the channel in multi-channel input data used to detect voice commands.
• minimum_time_between_inferences [float] [default=0.1]: Minimum time between two consecutive inferences
• sample_rate [int] [default=]: Sample rate of the supported audio (model-specific parameter)
• fft_length [int] [default=]: Length of Fourier transform window
• num_mels [int] [default=]: Number of mel bins to be extracted
• num_mfcc [int] [default=]: Number of Mel-frequency cepstral coefficients to be computed
• start_coefficient [int] [default=]: Index of the starting cepstral coefficient to be computed
• hop_size [int] [default=]: Stride for consecutive Fourier transform windows
• window_length [int] [default=]: Length of one audio frame which is used for keyword detection. This is the number of time frames after computing STFT with above params
• mean [std::vector<float>] [default=]: Mean feature map constructed from the training dataset
• sigma [std::vector<float>] [default=]: Standard deviation of the feature map

## isaac.dummies.ImageLoader¶

Description

Reads images from file systems and outputs them as messages. This can for example be used to create mock up tests when no camera hardware is available.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• color [ColorCameraProto]: Output the image proto for yolo inference
• depth [DepthCameraProto]: Output the image proto for yolo inference

Parameters

• color_filename [string] [default=”“]: Path of the color image file. The image is expected to be a 3-channel RGB PNG.
• depth_filename [string] [default=”“]: Path of the depth image file. The image is expected to be a 1-channel 16-bit greyscale PNG.
• depth_scale [double] [default=0.001]: A scale parameter to convert 16-bit depth to f32 depth
• distortion_model [string] [default=”brown”]: Image undistortion model. Must be ‘brown’ or ‘fisheye’
• focal_length [Vector2d] [default=]: Focal length in pixels
• optical_center [Vector2d] [default=]: Optical center in pixels
• distortion_coefficients [Vector5d] [default=Vector5d::Zero()]: Distortion coefficients (see the DistortionProto in Camera.capnp for details)
• min_depth [double] [default=0.0]: Minimum depth
• max_depth [double] [default=10.0]: Maximum depth
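The depth_scale, min_depth, and max_depth parameters imply a simple 16-bit-to-float conversion, sketched below. The clamping to the depth range is an assumption; only the scale factor itself is documented.

```python
def depth_to_meters(raw_depth_u16, depth_scale=0.001,
                    min_depth=0.0, max_depth=10.0):
    """Convert a raw 16-bit depth value to meters: raw units times
    depth_scale, clamped to [min_depth, max_depth] (clamping is a guess)."""
    meters = raw_depth_u16 * depth_scale
    return min(max(meters, min_depth), max_depth)

print(depth_to_meters(1500))   # 1500 raw units at 0.001 scale -> 1.5 m
print(depth_to_meters(65535))  # far beyond the range, clamps to max_depth
```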

## isaac.flatsim.DifferentialBasePhysics¶

Description

Runs a very basic physics simulation which moves a differential base by following commands quite literally.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• command [ActuatorGroupProto]: Actuator commands for the wheels of a differential base

Outgoing messages

• bodies [RigidBody3GroupProto]: Resulting physics state of the differential base body

Parameters

• robot_model [string] [default=”shared_robot_model”]: Name of the robot model node
• wheel_acceleration_noise [double] [default=0.03]: Each step a random normal-distributed noise with the given sigma will be added to the desired wheel acceleration. The sigma will be scaled based on the time step and wheel speed.
• wheel_acceleration_noise_decay [double] [default=0.995]: The wheel acceleration noise is additive simulating a random walk. To keep the noise bounded around zero it is multiplied with a decay factor at every timestep.
• slippage_magnitude_range [Vector2d] [default=Vector2d(0.00, 0.05)]: A random friction value is applied which effectively reduces the effect of wheel speed on wheel distance driven. A friction value of 0 means full transmission, while a friction value of 1 means full slippage. Slippage is computed randomly using a uniform distribution with the given minimum and maximum value.
• slippage_duration_range [Vector2d] [default=Vector2d(0.50, 1.25)]: The slippage value is maintained constant for a certain duration and then changed to a new value. The duration of the slippage is computed using a uniform distribution with given minimum and maximum value.
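The slippage model described by the two parameters above can be sketched in Python. This is an illustrative sketch of the sampling scheme, not the Isaac implementation; the function name is invented.

```python
import random

rng = random.Random(0)  # seeded for reproducibility

def sample_slippage(magnitude_range, duration_range):
    """One slippage episode, per the description above: a uniformly
    sampled slippage value, held for a uniformly sampled duration."""
    slippage = rng.uniform(*magnitude_range)
    duration = rng.uniform(*duration_range)
    return slippage, duration

slippage, duration = sample_slippage((0.00, 0.05), (0.50, 1.25))

# Slippage scales down the distance actually driven:
# 0 means full transmission, 1 means full slippage.
commanded_distance = 0.10
effective_distance = commanded_distance * (1.0 - slippage)
print(0.095 <= effective_distance <= commanded_distance)  # True: at most 5% slip
```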

## isaac.flatsim.DifferentialBaseSimulator¶

Description

Simulates a differential base by translating base commands into actuator commands, and by publishing the base state computed from the rigid body state provided by the simulator

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• diff_base_command [StateProto]: Input command message with desired body speed
• physics_bodies [RigidBody3GroupProto]: Input state of the base rigid body as computed by physics

Outgoing messages

• physics_actuation [ActuatorGroupProto]: Output actuator message with desired accelerations for each wheel
• diff_base_state [DifferentialBaseStateProto]: Output state of differential base

Parameters

• max_wheel_acceleration [double] [default=10.0]: The maximum acceleration for a wheel
• power [double] [default=0.20]: How fast the base will accelerate towards the desired speed
• flip_left_wheel [bool] [default=false]: If this is enabled the direction of the left wheel will be flipped
• flip_right_wheel [bool] [default=false]: If this is enabled the direction of the right wheel will be flipped
• robot_model [string] [default=”shared_robot_model”]: Name of the robot model node
• joint_name_left_wheel [string] [default=”left_wheel”]: Name of the joint for left wheel
• joint_name_right_wheel [string] [default=”right_wheel”]: Name of the joint for right wheel

## isaac.flatsim.SimRangeScan¶

Description

Simulates a 2D range scan

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• flatscan [FlatscanProto]: Output a FlatScan proto: this is a list of beams (angle + distance)

Parameters

• num_beams [int] [default=360]: The number of beams in the range scan
• min_range [double] [default=0.25]: The minimum range at which obstacles are detected
• max_range [double] [default=50.0]: The maximum range of the simulated LIDAR
• range_sigma_rel [double] [default=0.001]: Standard deviation of relative range error
• range_sigma_abs [double] [default=0.03]: Standard deviation of absolute range error
• beam_invalid_probability [double] [default=0.05]: Probability that a beam will be simulated as invalid
• beam_random_probability [double] [default=0.00001]: Probability that a beam will return a random range
• beam_short_probability [double] [default=0.03]: Probability that a beam will return a smaller range
• map [string] [default=”map”]: Map node to use for tracing range scans
• lidar_frame [string] [default=”lidar”]: Name of the lidar’s frame.
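The beam parameters above suggest a per-beam noise model along the following lines. The exact order and form of the noise model is a guess for illustration; only the parameter names and defaults come from this section.

```python
import random

def simulate_beam(true_range, rng,
                  range_sigma_rel=0.001, range_sigma_abs=0.03,
                  beam_invalid_probability=0.05,
                  beam_random_probability=0.00001,
                  beam_short_probability=0.03,
                  max_range=50.0):
    """Sketch of one simulated beam: invalid, random, or short returns
    happen with the given probabilities; otherwise the true range is
    perturbed by relative and absolute Gaussian noise."""
    r = rng.random()
    if r < beam_invalid_probability:
        return None  # beam reported as invalid
    if r < beam_invalid_probability + beam_random_probability:
        return rng.uniform(0.0, max_range)  # spurious random return
    if r < (beam_invalid_probability + beam_random_probability
            + beam_short_probability):
        return rng.uniform(0.0, true_range)  # early (short) return
    # Otherwise: true range with relative and absolute noise combined.
    noisy = rng.gauss(true_range, range_sigma_abs + range_sigma_rel * true_range)
    return min(max(noisy, 0.0), max_range)

rng = random.Random(42)
ranges = [simulate_beam(10.0, rng) for _ in range(1000)]
valid = [r for r in ranges if r is not None]
print(f"{len(valid)} of 1000 beams valid")
```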

## isaac.hgmm.HgmmPointCloudMatching¶

Description

Calculates the ego pose with an HGMM (Hierarchical Gaussian Mixture Model) from input point clouds. https://research.nvidia.com/publication/2018-09_HGMM-Registration

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• cloud [PointCloudProto]: Takes as input point clouds from sensor like Lidar or Depth Camera
Outgoing messages
(none)

Parameters

• levels [int] [default=2]: The number of levels to build the HGMM tree. This number depends on the complexity of the scene geometry and the number of points in the point clouds. Typically 2 works well for simple scenes and point clouds, while 3 empirically works better for denser point clouds such as Velodyne-32 or higher. The higher the level, the more accurate the registration, but divergence becomes more probable (the model overfits and becomes unstable). Levels of 4 or more are typically reserved for high-fidelity 3D reconstruction, not 6-DoF registration.
• convergence_threshold [float] [default=0.001]: The lower the threshold, the longer the algorithm takes to converge, but the better the accuracy. 0.01: fast to converge but worse accuracy; 0.001-0.0001: slow to converge but often better accuracy.
• max_iterations [int] [default=30]: Maximum number of iterations regardless of convergence. Most problems take on the order of 10-35 iterations per level for normal convergence tolerance ranges.
• noise_floor [float] [default=0.000]: TODO Noise parameter (currently turned off). Used if the data contains extreme outliers. In the meantime, basic filtering of the input needs to be performed outside of HGMM model creation and registration.
• regularization [float] [default=0.01]: Regularization to prevent singularities and overfitting. If the solution is diverging, this parameter is too low. 0.0001: highly accurate but often unstable; 0.001: highly accurate but possible divergence; 0.01: robust convergence but higher error; 0.1: very robust but possibly biased result.
• axis_length [double] [default=1.0]: Ego frame axis length
• skip [int] [default=51]: Skips points to reduce the visualization load
• history_size [int] [default=10]: Number of past point clouds kept for visualization
• max_distance [double] [default=10.0]: No points beyond this distance are visualized

## isaac.imu.IioBmi160¶

Description

Interface for a BMI160 IMU IIO device. This class sets up the IMU device (accel + gyro) and publishes IMU data.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• imu_raw [ImuProto]: ImuProto is used to publish IMU data read from the buffer

Parameters

• i2c_device_id [int] [default=1]: I2C device ID: matches ID of /dev/i2c-X
• imu_T_imu [SO3d] [default=SO3d::FromAxisAngle(Vector3d{1, 0, 0}, Pi<double>)]: IMU mounting pose. In the base case, the IMU is mounted on its back: rotate 180 degrees about the X axis (flipping the Y and Z axes)

## isaac.imu.ImuCalibration2D¶

Description

Codelet to perform IMU calibration. Provides access to the IMU calibration library and creates (or updates) the calibration file.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• imu [ImuProto]: Imu Data
Outgoing messages
(none)

Parameters

• imu_calibration_file [string] [default=”imu_calibration.out.json”]: Path to the output calibration file. This file will be created if it does not exist and overwritten if it exists.
• imu_variance_stationary [double] [default=0.2]: Threshold for stationary variance
• imu_window_length [int] [default=100]: Number of samples in window

## isaac.imu.ImuCorrector¶

Description

Receives raw IMU data and removes biases either by using calibration data that is supplied or calibrating itself in the beginning.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• raw [ImuProto]: Receive raw IMU data

Outgoing messages

• corrected [ImuProto]: Publish corrected IMU data

Parameters

• calibration_file [string] [default=]: Optional calibration file. If a calibration file is provided, biases from the file will be removed from the IMU data. Otherwise we will calibrate in the beginning.
• calibration_variance_stationary [double] [default=0.1]: Stationary variance for calibration
• calibration_window_length [int] [default=100]: Number of samples in window for calibration

## isaac.imu.ImuSim¶

Description

This codelet manages a single IMU sensor in the simulator. Each IMU is associated with the robot by a user-defined 3D transformation (Pose3d). The user can provide biases, noise, and an optional calibration file as parameters.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• bodies [RigidBody3GroupProto]: Input states of the rigid bodies as computed by physics engine

Outgoing messages

• imu_raw [ImuProto]: Imu proto is used to publish raw Imu data received from simulator

Parameters

• robot_name [string] [default=”carter_1”]: The parent actor for the IMU is the robot. This parameter is required and should match the config file for the sim.
• robot_T_imu [Pose3d] [default=Pose3d::Identity()]: The IMU is always associated with a parent robot. The gravity vector is initialized orthogonal to the X-Y plane of the robot, pointing down (-Z).
• imu_name [string] [default=”imu_1”]: Name of the IMU rigid body. This parameter is required and should match the config file for the sim.
• sampling_rate [double] [default=30.0]: IMU sampling frequency. TODO: increase the sampling frequency; it is currently set to 30 Hz, limited by the sim.
• accel_bias [Vector3d] [default=Vector3d::Zero()]: Accelerometer Bias
• accel_noise [Vector3d] [default=Vector3d::Zero()]: Accelerometer (zero mean) noise std dev
• gyro_bias [Vector3d] [default=Vector3d::Zero()]: Gyroscope Bias
• gyro_noise [Vector3d] [default=Vector3d::Zero()]: Gyroscope (zero mean) noise std dev

## isaac.kinova_jaco.KinovaJaco¶

Description

A class to receive command and publish state information for the Kinova Jaco arm.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• cartesian_pose_command [StateProto]: Command for end effector position and orientation
• joint_velocity_command [StateProto]: Command for angular velocities for joints

Outgoing messages

• cartesian_pose [StateProto]: Current position and orientation of end effector
• joint_position [StateProto]: Current angle, in Radians, for each joint (7-dof)
• joint_velocity [StateProto]: Current angular velocity, in Radians/sec, for each joint (7-dof)
• finger_position [StateProto]: Current position for each finger

Parameters

• kinova_jaco_sdk_path [string] [default=]: Path to the Jaco SDK, as set in jaco_driver_config.json. The driver is tested for use with JACO2SDK v1.4.2. Jaco SDK source: https://drive.google.com/file/d/17_jLW5EWX9j3aY3NGiBps7r77U2L64S_/view
• control_mode [ControlMode] [default=kCartesianPose]: Set control mode for arm. Can only accept commands corresponding to the current mode.

## isaac.map.Map¶

Description

This component is used to mark a node as a map and gives convenient access to the various map layers and also some cross-layer functionality.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• graph_file_name [string] [default=”“]: Filename under which to store the current graph whenever there is an update to the map.
• config_file_name [string] [default=”“]: Filename under which to store the current configuration whenever there is an update to the map.

## isaac.map.MapBridge¶

Description

A bridge for communication between map container and WebsightServer

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• request [nlohmann::json]: Request to the MapBridge

Outgoing messages

• reply [nlohmann::json]: Reply from the MapBridge
Parameters
(none)

## isaac.map.OccupancyGridMapLayer¶

Description

A grid map layer for a map node. It provides access to an occupancy grid map which stores for each cell whether the cell is blocked or free. It also holds a distance map, computed from the occupancy grid map based on a given threshold, which contains the distance to the nearest obstacle for each cell.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• filename [string] [default=]: Filename of greyscale PNG which will be loaded as the occupancy grid map
• cell_size [double] [default=]: Size of one map pixel in meters
• threshold [double] [default=0.4]: Threshold used to compute the distance map. Cells with a value larger than this threshold are assumed to be blocked.

## isaac.map.PolygonMapLayer¶

Description

A map layer which holds annotated polygons and provides various methods to access them

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• polygons [json] [default=nlohmann::json::object()]: A json object from configuration containing the polygons. Layout: { "poly1": { "points": [[<polygon point1>], [<polygon point2>]], }, }
• color [Vector3i] [default=(Vector3i{255, 0, 0})]: Layer color.

## isaac.map.WaypointMapLayer¶

Description

A map layer which holds annotated waypoints and provides various methods to access them

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• waypoints [json] [default=nlohmann::json::object()]: A json object from configuration containing the waypoints. Layout: { "wp1": { "pose": [1,0,0,0,0,0,0], "radius": 0.5 }, "wp3": { "pose": [1,0,0,0,0.1,-1.2,0], "color": [54.0, 127.0, 255.0] } }

## isaac.ml.ColorCameraEncoder¶

Description

ColorCameraEncoder encodes images for input into the object segmentation neural network.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• rgb_image [ColorCameraProto]: Input RGB color image

Outgoing messages

• tensor [TensorListProto]: The sample will be a list of one tensor. It will represent the RGB image with each pixel normalized in the range [-1, 1].

Parameters

• rows [int] [default=960]: The image is resized before it is encoded. Currently, only downsampling is supported for this. Number of pixels in the height dimension of the downsampled image.
• cols [int] [default=540]: The image is resized before it is encoded. Currently, only downsampling is supported for this. Number of pixels in the width dimension of the downsampled image.
• pixel_normalization_mode [ImageToTensorNormalization] [default=ImageToTensorNormalization::kNone]: Type of normalization to be performed.
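The [-1, 1] normalization mentioned for the tensor output can be sketched as follows. This is a minimal illustration of the mapping only; the actual normalization modes are defined by the SDK's ImageToTensorNormalization enum.

```python
import numpy as np

def normalize_to_unit_range(rgb):
    """Map uint8 pixels in [0, 255] to float values in [-1, 1]."""
    return rgb.astype(np.float32) / 127.5 - 1.0
```

A pixel value of 0 maps to -1.0 and 255 maps to 1.0.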

## isaac.ml.DetectionDecoder¶

Description

DetectionDecoder converts a TensorListProto of object detection values to a Detections2Proto type. This codelet has the inverse functionality of the DetectionEncoder codelet. Each detection in the TensorList is represented in the format {bounding_box<x1, y1, x2, y2>, objectness, probability(class1, class2, ... class<i>, ... class<N>)}. bounding_box holds the minimum and maximum coordinates of the bounding box. objectness is the confidence score which reflects how likely the box is to contain an object. probability(class<i>) is the probability that the detected object belongs to a particular class i, where i ranges from 1 to the total number of classes N.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• input_detection_tensors [TensorListProto]: Input detection proto. proto[0] holds the bounding box parameters in the format {{bounding_box1{x1, y1, x2, y2}, objectness, {probability1, ..., probability<N>}}, ..., {bounding_box<K>{x1, y1, x2, y2}, objectness, {probability1, ..., probability<N>}}}, where N is the number of classes the network is trained on and K is the number of bounding boxes predicted. For each output layer the expected number of bounding boxes is (grid_size * grid_size) * num_bboxes_per_grid, where grid_size = network_height / output_layer_stride, output_layer_stride is the ratio at which the layer downscales the input, and num_bboxes_per_grid is the number of bounding box predictions per grid cell. bounding_box<K> holds the minimum and maximum (x, y) coordinates; objectness is the confidence that an object is present in the bounding box; probability<i> is the confidence that the object in the bounding box belongs to class i. The number of classes is a parameter defined in the network config proto (proto[1]). Example: for a yolov3-tiny network trained on N classes with network dimensions 416x416 and 2 output layers of stride 32 and 16, the grid sizes are grid_size1 = 416/32 = 13 and grid_size2 = 416/16 = 26, and num_bboxes_per_grid = 3, so num_bboxes = (13*13 + 26*26)*3 = 2535. The output tensor is then an array of 2535 detections, where each detection is laid out as tensor<0-4>: {x1, y1, x2, y2}, the minimum and maximum bounding box coordinates; tensor<5>: objectness, the confidence that an object exists in the bounding box; tensor<6 - 6 + N>: p1, ..., p<N>, the confidence that the detected object belongs to class index i in the range (1-N).
proto[1] holds the network config parameters {network_width, network_height, image_width, image_height, number of classes trained on, number of parameters per bounding box (excluding class probabilities)}. Example: for a yolov3-tiny network trained on 6 classes with network dimensions 416x416, running inference on an image of size 1280x720, with 5 parameters per box (4 bounding box coordinates plus the objectness score), the network config tensor is tensor(0-5) = {416, 416, 1280, 720, 6, 5}

Outgoing messages

• detections [Detections2Proto]: Output detections with bounding box, label, and confidence Poses will not be populated here.

Parameters

• nms_threshold [double] [default=0.6]: Non-maximum suppression threshold
• confidence_threshold [double] [default=0.6]: Confidence threshold of the detection
• labels_file_path [string] [default="labels.txt"]: Path of the labels file with the names of the classes the network was trained on. Every line of the labels file corresponds to one class name.
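The bounding-box count formula from the input_detection_tensors description can be checked with a short sketch (the function name is illustrative, not part of the SDK):

```python
def num_predicted_boxes(network_size, strides, num_bboxes_per_grid=3):
    """Sum over output layers of (grid_size^2) * num_bboxes_per_grid,
    where grid_size = network_size / output_layer_stride."""
    return sum((network_size // s) ** 2 for s in strides) * num_bboxes_per_grid
```

For the yolov3-tiny example above, num_predicted_boxes(416, [32, 16]) yields (13*13 + 26*26)*3 = 2535.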

## isaac.ml.DetectionEncoder¶

Description

Encodes detection for input into the object detection neural network.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• detection [Detections2Proto]: Input detection proto.

Outgoing messages

• tensor [TensorListProto]: The tensor will contain detection labels of size N x 5 where N is the number of bounding boxes in a single image

Parameters

• class_names [json] [default={}]: The class names of the detection objects.
• area_threshold [double] [default=10.0]: The minimum area of bounding boxes

## isaac.ml.HeatmapDecoder¶

Description

Converts a tensor representing heatmap values to a HeatmapProto type. This codelet has the inverse functionality of the HeatmapEncoder codelet. Please refer to HeatmapEncoder.hpp for details on the message types.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• tensor [TensorProto]: Input tensor containing heatmap of probabilities

Outgoing messages

• heatmap [HeatmapProto]: Output heatmap proto

Parameters

• grid_cell_size [double] [default=2.0]: Cell size (in meters) of every pixel in the heatmap
• map_frame [string] [default="world"]: The pose map frame for the heatmap

## isaac.ml.HeatmapEncoder¶

Description

Converts a heatmap of type HeatmapProto to a TensorList. HeatmapProto message type consists of a heatmap image, the name of the map frame pose and the cell size (in meters) that each heatmap pixel represents. Output TensorList can be used as input to downstream nodes, such as TensorSynchronization.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• heatmap_proto [HeatmapProto]: Input heatmap proto containing heatmap image of probabilities

Outgoing messages

• heatmap_tensor [TensorListProto]: Output tensor list proto, which can be fed to tensor synchronizer
Parameters
(none)

## isaac.ml.SampleAccumulator¶

Description

Collects training samples and makes them available for machine learning

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• samples [TensorListProto]: Incoming samples
Outgoing messages
(none)

Parameters

• sample_buffer_size [int] [default=256]: Number of training samples to keep in the buffer

## isaac.ml.SegmentationDecoder¶

Description

Converts a tensor to a segmentation prediction

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• tensors [TensorListProto]: The input tensor contains semantic segmentation label prediction where each pixel has a probability distribution over all classes (WxHxN where N is number of classes)

Outgoing messages

• segmentation_prediction [SegmentationPredictionProto]: Output segmentation prediction proto which contains the class information

Parameters

• class_names [json] [default={}]: Names of the classes in an array. Each class is represented by a string. The number of classes must match the number of classes in the tensor input.

## isaac.ml.SegmentationEncoder¶

Description

Encodes segmentation for input into the object segmentation neural network.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• segmentation [SegmentationCameraProto]: Input segmentation image.

Outgoing messages

• tensor [TensorListProto]: The tensor will contain semantic segmentation labels where a pixel is 1 if it is part of the object and 0 otherwise.

Parameters

• target_object_label [string] [default=]: The label set by the simulation of which pixels represent the object.

## isaac.ml.Teleportation¶

Description

Teleportation is a class that generates random poses and sends them to an actor group codelet.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• relative_frame [Pose3dProto]: Proto used to receive a reference frame (pose)

Outgoing messages

• rigid_command [RigidBody3GroupProto]: Proto used to publish rigid body pose to the sim bridge
• relative_frame_cmd [Pose3dProto]: Proto used to publish rigid body pose to another teleportation codelet as a reference frame

Parameters

• min [Vector3d] [default=Vector3d::Zero()]: Minimum translation in X, Y, Z coordinates
• max [Vector3d] [default=Vector3d(1.0, 1.0, 1.0)]: Maximum translation in X, Y, Z coordinates
• min_roll [double] [default=0.0]: Minimum roll change after a teleportation
• max_roll [double] [default=TwoPi<double>]: Maximum roll change after a teleportation
• min_pitch [double] [default=0.0]: Minimum pitch change after a teleportation
• max_pitch [double] [default=TwoPi<double>]: Maximum pitch change after a teleportation
• min_yaw [double] [default=0.0]: Minimum yaw change after a teleportation
• max_yaw [double] [default=TwoPi<double>]: Maximum yaw change after a teleportation
• min_scale [double] [default=0.0]: Minimum multiplicative scale factor of corresponding objects in simulation
• max_scale [double] [default=1.0]: Maximum multiplicative scale factor of corresponding objects in simulation
• name [string] [default=""]: Name of actor to teleport
• enable_translation_x [bool] [default=true]: Flag to enable translation (X)
• enable_translation_y [bool] [default=true]: Flag to enable translation (Y)
• enable_translation_z [bool] [default=true]: Flag to enable translation (Z)
• enable_roll [bool] [default=false]: Flag to enable rotation (roll)
• enable_pitch [bool] [default=false]: Flag to enable rotation (pitch)
• enable_yaw [bool] [default=false]: Flag to enable rotation (yaw)
• enable_scale [bool] [default=false]: Flag to enable scale
• enable_on_relative_frame [bool] [default=false]: Flag to tick on relative frame message
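The sampling behavior implied by the bounds above can be sketched as follows. This is a minimal sketch under the assumption of uniform sampling per axis; the function name is illustrative, and roll/pitch/scale are omitted for brevity.

```python
import math
import random

def sample_teleport_pose(min_xyz, max_xyz, min_yaw=0.0,
                         max_yaw=2.0 * math.pi, enable_yaw=False):
    """Uniformly sample a translation within the per-axis bounds and,
    optionally, a yaw angle within [min_yaw, max_yaw]."""
    translation = [random.uniform(lo, hi) for lo, hi in zip(min_xyz, max_xyz)]
    yaw = random.uniform(min_yaw, max_yaw) if enable_yaw else 0.0
    return translation, yaw
```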

## isaac.ml.TensorReshape¶

Description

Reshapes each tensor in a tensor list to the desired dimensions.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• input_tensors [TensorListProto]: The list of tensors to be reshaped using the output_tensors_dimension parameter

Outgoing messages

• output_tensors [TensorListProto]: The list of tensors after being reshaped using the output_tensors_dimension parameter

Parameters

• output_tensors_dimension [json] [default={}]: Tensor shape information for each tensor in the list. It must be an array of arrays, where the number of arrays must equal the number of tensors in input_tensors.
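The reshape contract can be illustrated with NumPy (a sketch; the variable names mirror the channel and parameter names above, and the shapes are arbitrary examples):

```python
import numpy as np

# One shape array per input tensor; element counts must match per tensor.
output_tensors_dimension = [[1, 784], [10]]
input_tensors = [np.zeros((28, 28)), np.zeros((2, 5))]
output_tensors = [t.reshape(dims)
                  for t, dims in zip(input_tensors, output_tensors_dimension)]
```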

## isaac.ml.TensorSynchronization¶

Description

Synchronizes up to four tensors and outputs them as a single TensorListProto message. The codelet only publishes when all inputs are valid.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• tensor1 [TensorListProto]: Input tensor lists (channel 1)
• tensor2 [TensorListProto]: Input tensor lists (channel 2)

Outgoing messages

• tensorlist [TensorListProto]: Output tensor list

Parameters

• tensor_count [int] [default=2]: Number of tensors to synchronize

## isaac.ml.TensorflowInference¶

Description

This codelet loads a trained TensorFlow model along with the runtime and runs inference with the model. It is intended to enable faster prototyping of machine learning techniques in robotics.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• input_tensors [TensorListProto]: Input tensor data as input of the inference model

Outgoing messages

• output_tensors [TensorListProto]: Output tensor data as output of the inference model

Parameters

• input_tensor_info [json] [default={}]: Input tensor information in JSON, for example: [ { "ops_name": "input", "index": 0, "dims": [1, 224, 224, 3] } ]
• output_tensor_info [json] [default={}]: Output tensor information in JSON
• model_file_path [string] [default=]: Path to the file containing the model data
• config_file_path [string] [default=]: Path to the file containing the config data
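A hypothetical configuration fragment for a single-input, single-output classifier might look like the following. The op names, dimensions, and file paths are placeholders for illustration, not SDK defaults.

```python
# Hypothetical TensorflowInference configuration values, expressed as a
# Python dict mirroring the JSON parameters above.
tensorflow_inference_config = {
    "input_tensor_info": [
        # Op name, output index, and dimensions of the graph's input tensor.
        {"ops_name": "input", "index": 0, "dims": [1, 224, 224, 3]},
    ],
    "output_tensor_info": [
        {"ops_name": "output", "index": 0, "dims": [1, 1000]},
    ],
    "model_file_path": "/path/to/model.pb",
    "config_file_path": "/path/to/config.pb",
}
```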

Description

Converts a cost map into a binary map based on thresholds and computes a distance map from it. The resulting distance map is added as an obstacle into a linked ObstacleWorld component.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• binary_map [BinaryMapProto]: Incoming binary map which will be converted to distance map

Outgoing messages

• distance_map [DistanceMapProto]: Outgoing distance map which indicates the distance to nearest obstacles for every map cell

Parameters

• max_distance [double] [default=10.0]: The maximum distance used for the distance map (in meters)
• blur_factor [int] [default=0]: If set to a value greater than 0 the distance map will be blurred with a Gaussian kernel of the specified size.
• compute_distance_inside [bool] [default=false]: If enabled the distance map will also be included inside obstacles. The distance is negative and measures the distance to the obstacle boundary. Otherwise the distance inside obstacles will be 0.
• distance_map_quality [int] [default=2]: Specifies the desired quality of the distance map. Possible values are: 0: Uses the QuickDistanceMapApproximated algorithm which is fast but produces artefacts 1: Uses QuickDistanceMap with queue length of 25 2: Uses QuickDistanceMap with queue length of 100 3: Uses DistanceMap which computes an accurate distance map but is quite slow
• obstacle_world_component [string] [default="obstacle_world/obstacle_world"]: Link to an ObstacleWorld component to which the distance map will be added as an obstacle.
• obstacle_name [string] [default="local_map"]: Name used to register the map into the obstacle_world component.
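The semantics of the distance map, including the max_distance clamping, can be illustrated with a brute-force sketch. The SDK's QuickDistanceMap variants listed under distance_map_quality are much faster; this only shows what is computed.

```python
import numpy as np

def brute_force_distance_map(blocked, cell_size, max_distance):
    """Per cell: Euclidean distance in meters to the nearest blocked cell,
    clamped to max_distance. Cells inside obstacles get 0."""
    rows, cols = blocked.shape
    obstacles = np.argwhere(blocked)
    out = np.full((rows, cols), max_distance, dtype=np.float64)
    if obstacles.size == 0:
        return out
    for r in range(rows):
        for c in range(cols):
            d = np.min(np.hypot(obstacles[:, 0] - r, obstacles[:, 1] - c))
            out[r, c] = min(d * cell_size, max_distance)
    return out
```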

Description

Runs cartographer (https://google-cartographer.readthedocs.io/en/latest/) to create a 2D map from lidar data.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• flatscan [FlatscanProto]: Consumes flattened lidar scan data
Outgoing messages
(none)

Parameters

• lua_configuration_directory [string] [default=""]: Folders to search for Lua scripts, separated by commas
• lua_configuration_basename [string] [default=""]: File name of the specific Lua script to load
• output_path [string] [default="/tmp"]: Folder to write submaps and the generated map
• background_size [Vector2i] [default=Vector2i(1500, 1500)]: Visualization settings: canvas background size (rows, cols)
• background_translation [Vector2d] [default=Vector2d(-75, -75)]: Translation to apply on the background image.
• num_visible_submaps [int] [default=8]: Visualizes only the latest submaps to save CPU

Description

Takes detections with bounding boxes in pixel coordinates and projects them into robot coordinates, outputting poses relative to the robot frame. For a point of interest in the camera image, we can get a 3D translation relative to the camera frame using (1) camera intrinsics, (2) depth information, and (3) the location on the image. The question is which location to use. For each detection we have a bounding box; a naive approach would be to pick only the center location. For robustness, we generalize this idea below. 1. For each detection, we focus around the center of the bounding box, because not every pixel of the bounding box belongs to the object of interest. 2. We get the region of interest (ROI) by shrinking the bounding box using roi_scale. 3. Around each of the 4 corners of the ROI, we create a small bounding box called an unprojection_area. 4. We take the average of the points (represented in the camera frame) over every pixel of the 4 unprojection_areas to get our final estimate.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• depth_image [DepthCameraProto]: Input depth image to use to find real-world coordinates of bounding boxes
• detections [Detections2Proto]: Bounding box in pixel coordinates and class label of objects in an image

Outgoing messages

• detections_with_poses [Detections3Proto]: Output list of detections with their 3D poses populated by this codelet

Parameters

• roi_scale [double] [default=0.25]: Scale factor for getting the region of interest (ROI) from detection bounding box. Please see codelet summary above for details.
• spread [Vector2i] [default=Vector2i(10, 10)]: In pixels, half dimensions of the unprojection_areas in row and column. Please see codelet summary above for details.
• invalid_depth_threshold [double] [default=0.05]: Depth values smaller than this value are considered to be invalid.
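The averaging steps above can be sketched with a pinhole camera model. This is a minimal sketch: the intrinsics layout (fx, fy, cx, cy) and the helper names are assumptions, not the SDK's API.

```python
import numpy as np

def unproject(px, depth, fx, fy, cx, cy):
    """Pinhole unprojection of a pixel (col, row) at a given depth."""
    u, v = px
    return np.array([(u - cx) / fx * depth, (v - cy) / fy * depth, depth])

def estimate_translation(roi_corners, depth_image, intrinsics,
                         spread=(10, 10), invalid_depth_threshold=0.05):
    """Average the unprojected points in small windows around the ROI
    corners, skipping invalid depth values."""
    fx, fy, cx, cy = intrinsics
    points = []
    for (u, v) in roi_corners:
        for dv in range(-spread[0], spread[0] + 1):
            for du in range(-spread[1], spread[1] + 1):
                r, c = v + dv, u + du
                if 0 <= r < depth_image.shape[0] and 0 <= c < depth_image.shape[1]:
                    d = depth_image[r, c]
                    if d > invalid_depth_threshold:
                        points.append(unproject((c, r), d, fx, fy, cx, cy))
    return np.mean(points, axis=0) if points else None
```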

Description

Integrates (2D) odometry for a differential base to estimate its ego motion.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• state [DifferentialBaseStateProto]: Incoming current dynamic state of the differential base which is used to estimate its ego motion in an odometry frame.

Outgoing messages

• odometry [Odometry2Proto]: Outgoing ego motion estimate for the differential base.

Parameters

• max_acceleration [double] [default=5.0]: Maximum acceleration to use (helps with noisy data or wrong data from simulation)
• odometry_frame [string] [default="odom"]: The name of the source coordinate frame under which to publish the pose estimate.
• robot_frame [string] [default="robot"]: The name of the target coordinate frame under which to publish the pose estimate.
• prediction_noise_stddev [Vector6d] [default=(MakeVector<double, 6>({0.05, 0.05, 0.35, 0.05, 1.00, 3.00}))]: 1 sigma of noise used for prediction model in the following order: pos_x, pos_y, heading, speed, angular_speed, acceleration
• observation_noise_stddev [Vector3d] [default=(Vector3d{0.25, 0.45, 2.0})]: 1 sigma of noise used for observation model in the following order: speed, angular_speed, acceleration

Description

Integrates (2D) odometry for a differential base to estimate its ego motion.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• state [DifferentialBaseStateProto]: Incoming current dynamic state of the differential base which is used to estimate its ego motion in an odometry frame.
• imu [ImuProto]: Optional measurement input from IMU for better accuracy

Outgoing messages

• odometry [Odometry2Proto]: Outgoing ego motion estimate for the differential base.

Parameters

• max_acceleration [double] [default=5.0]: Maximum acceleration to use (helps with noisy data or wrong data from simulation)
• odometry_frame [string] [default="odom"]: The name of the source coordinate frame under which to publish the pose estimate.
• robot_frame [string] [default="robot"]: The name of the target coordinate frame under which to publish the pose estimate.
• prediction_noise_stddev [Vector6d] [default=(MakeVector<double, 6>({0.05, 0.05, 0.35, 0.05, 1.00, 3.00}))]: 1 sigma of noise used for prediction model in the following order: pos_x, pos_y, heading, speed, angular_speed, acceleration
• observation_noise_stddev [Vector3d] [default=(Vector3d{0.25, 0.45, 2.0})]: 1 sigma of noise used for observation model in the following order: speed, angular_speed, acceleration
• use_imu [bool] [default=true]: Enables/Disables the use of IMU
• weight_imu_angular_speed [double] [default=1.0]: Determines the trust in IMU while making angular speed observations. 1.0 means using IMU only. 0.0 means using segway data only. 0.5 means taking an average
• weight_imu_acceleration [double] [default=1.0]: Determines the trust in IMU while making linear acceleration observations. 1.0 means using IMU only. 0.0 means using segway data only. 0.5 means taking an average
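The weight_imu_* parameters describe a linear blend between the base and IMU measurements; a sketch (function name illustrative):

```python
def blend_observation(base_value, imu_value, weight_imu):
    """1.0 trusts the IMU only, 0.0 trusts the base only, 0.5 averages."""
    return weight_imu * imu_value + (1.0 - weight_imu) * base_value
```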

Description

Visualizes a flatscan at the estimated position.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• flatscan [FlatscanProto]: Incoming range scan used to localize the robot
Outgoing messages
(none)

Parameters

• beam_skip [int] [default=4]: The number of beams which are skipped for visualization
• map [string] [default="map"]: Map node to use for localization
• range_scan_model [string] [default="shared_robot_model"]: Name of the robot model node
• flatscan_frame [string] [default="lidar"]: Frame in which the flatscan is defined

Description

Receives a sequence of waypoints via a message and drives the robot from one waypoint to the next. This can be used, for example, in combination with the GoTo component.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• plan [Plan2Proto]: The path on which the robot should drive
• feedback [Goal2FeedbackProto]: Feedback about where we are with respect to the goal

Outgoing messages

• goal [Goal2Proto]: The desired goal waypoint

Parameters

• goal_frame [string] [default=]: The name of the frame in which the goal will be published
• stationary_wait_time [double] [default=5.0]: Seconds to wait before moving on to next waypoint if robot becomes stationary
• wait_time [double] [default=1.0]: Seconds to wait after arriving at a waypoint
• loop [bool] [default=false]: If set to true we will repeat following the path

Description

The GoTo class receives a goal pose from one of the goal generators, and then sends feedback regarding the status of the robot, e.g., whether the robot has arrived at the target or whether the robot is stationary.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• goal_in [Goal2Proto]: The target destination received

Outgoing messages

• goal_out [Goal2Proto]: Output goal for the robot
• feedback [Goal2FeedbackProto]: Feedback about the last received goal

Parameters

• arrived_position_thresholds [Vector2d] [default=Vector2d(0.5, DegToRad(15.0))]: Threshold on position to determine if the robot has arrived (positional and angular)
• stationary_speed_thresholds [Vector2d] [default=Vector2d(0.025, DegToRad(5.0))]: Threshold on speed to determine if the robot is stationary (positional and angular)
• var_rx_speed_pos [string] [default=]: Variable indicating linear speed
• var_rx_speed_rot [string] [default=]: Variable indicating angular speed
• var_tx_remaining_delta_pos [string] [default="remaining_delta_pos"]: Variable for specifying linear distance remaining to target
• var_tx_remaining_delta_rot [string] [default="remaining_delta_rot"]: Variable for specifying angular distance remaining to target
• var_tx_current_linear_speed [string] [default="current_linear_speed"]: Variable for specifying current linear speed
• var_tx_current_angular_speed [string] [default="current_angular_speed"]: Variable for specifying current angular speed
• var_tx_has_arrived [string] [default="has_arrived"]: Variable for specifying whether we arrived at the target
• var_tx_is_stationary [string] [default="is_stationary"]: Variable to write whether the robot is stationary
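The arrival check implied by arrived_position_thresholds can be sketched as follows (a minimal sketch; the function name is illustrative):

```python
import math

def has_arrived(remaining_delta_pos, remaining_delta_rot,
                thresholds=(0.5, math.radians(15.0))):
    """True when both the remaining linear and angular distances to the
    goal are within the (positional, angular) threshold pair."""
    return (abs(remaining_delta_pos) <= thresholds[0]
            and abs(remaining_delta_rot) <= thresholds[1])
```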

Description

A flatscan localization method using a gradient descent algorithm. This codelet uses a flatscan to localize the robot in a known map. As this is a local optimization technique, an initial guess is necessary. The computed pose of the scanner, and thus the robot, is written to the pose tree. This method is quite stable compared to the more noisy particle-filter based approach. However, it is a uni-modal technique which cannot deal well with ambiguity.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• flatscan [FlatscanProto]: Range scan used to localize the robot
Outgoing messages
(none)

Parameters

• map [string] [default="map"]: Name of map node to use for localization

Description

An exhaustive grid search localizer. Based on a flat range scan every possible pose in a map is checked for the likelihood that the scan was taken at that pose. The pose with the best match is written to the pose tree as a result. This node uses a simplified and customized range scan model to increase the performance of the algorithm. The algorithm currently only works for a 360 degree range scan with constant angular resolution. This component uses a GPU-accelerated algorithm. Depending on the map size and the GPU the runtime of the algorithm might range from less than a second to multiple seconds.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• flatscan [FlatscanProto]: The current sensor measurement based on which we try to localize in the map
Outgoing messages
(none)

Parameters

• robot_radius [double] [default=0.25]: The radius of the robot. This parameter is used to exclude poses which are too close to an obstacle.
• max_beam_error [double] [default=0.50]: The maximum beam error used when comparing range scans.
• num_beams_gpu [int] [default=256]: The GPU accelerated scan-and-match function can only handle a certain number of beams per range scan. The allowed values are {32, 64, 128, 256, 512}. If the number of beams in the range scan does not match this number a subset of beams will be taken.
• batch_size [int] [default=512]: This is the number of scans to collect into a batch for the GPU kernel. Choose a value which matches your GPU well.
• sample_distance [double] [default=0.1]: Distance between sample points in meters. The smaller this number, the more sample poses will be considered. This leads to a higher accuracy and lower performance.
• map [string] [default="map"]: Name of map node to use for localization
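The pose grid an exhaustive search scores can be sketched as follows. This is an illustration only: num_angles is an assumption for the sketch, and the actual component derives its candidate set from the map, robot_radius, and the scan's angular resolution.

```python
import itertools
import math

def candidate_poses(map_width_m, map_height_m, sample_distance=0.1,
                    num_angles=16):
    """Enumerate (x, y, heading) candidates: positions spaced every
    sample_distance meters and a discrete set of headings."""
    xs = [i * sample_distance for i in range(int(map_width_m / sample_distance))]
    ys = [i * sample_distance for i in range(int(map_height_m / sample_distance))]
    angles = [2.0 * math.pi * k / num_angles for k in range(num_angles)]
    return itertools.product(xs, ys, angles)
```

Halving sample_distance quadruples the number of positions, which is why it trades accuracy against runtime.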

Description

Integrates (2D) odometry for a holonomic base to estimate its ego motion.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• state [StateProto]: Incoming current dynamic state of the holonomic base which is used to estimate its ego motion in an odometry frame.
• imu [ImuProto]: Optional measurement input from IMU for better accuracy

Outgoing messages

• odometry [Odometry2Proto]: Outgoing ego motion estimate for the holonomic base.

Parameters

• max_acceleration [double] [default=5.0]: Maximum acceleration to use (helps with noisy data or wrong data from simulation)
• odometry_frame [string] [default="odom"]: The name of the source coordinate frame under which to publish the pose estimate.
• robot_frame [string] [default="robot"]: The name of the target coordinate frame under which to publish the pose estimate.
• prediction_noise_stddev [Vector8d] [default=(MakeVector<double, 8>({0.05, 0.05, 0.35, 0.05, 0.05, 1.00, 3.00, 3.00}))]: 1 sigma of noise used for prediction model in the following order: pos_x, pos_y, heading, speed_x, speed_y, angular_speed, acceleration_x, acceleration_y
• observation_noise_stddev [Vector5d] [default=(MakeVector<double, 5>({0.25, 0.25, 0.45, 2.0, 2.0}))]: 1 sigma of noise used for observation model in the following order: speed_x, speed_y, angular_speed, acceleration_x, acceleration_y
• use_imu [bool] [default=true]: Enables/Disables the use of IMU
• weight_imu_angular_speed [double] [default=1.0]: Determines the trust in IMU while making angular speed observations. 1.0 means using IMU only. 0.0 means using segway data only. 0.5 means taking an average
• weight_imu_acceleration [double] [default=1.0]: Determines the trust in IMU while making linear acceleration observations. 1.0 means using IMU only. 0.0 means using segway data only. 0.5 means taking an average

Description

Creates and maintains a dynamic obstacle grid map centered around the robot. The dynamic grid map is always relative to the robot, with the robot at a fixed location in the upper part of the map. The previous state of the grid map is continuously propagated into the present using the robot odometry. Good odometry is critical to maintaining a sharp, high-quality grid map. New flatscan measurements are integrated into the local map and mixed with the current local map accumulated from the past. The local map "forgets" information over time to allow gradual dynamic updates. This enables it to be useful in the presence of dynamic obstacles. However, thresholding might be challenging, and additional object detection and tracking should be used for dynamic obstacles.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• flatscan [FlatscanProto]: The gridmap is created based on flat range scans.

Outgoing messages

• local_map [OccupancyMapProto]: The latest dynamic obstacle grid map

Parameters

• cell_size [double] [default=0.05]: Size of a cell in the dynamic grid map in meters
• dimensions [Vector2i] [default=Vector2i(256, 256)]: The dimensions of the grid map in pixels
• map_offset_relative [Vector2d] [default=Vector2d(-0.25, -0.5)]: Local offset of robot relative to the map relative to the total map size.
• map_decay_factor [double] [default=0.98]: Before integrating a new range scan the current map is decayed with this factor. The lower this parameter the more forgetful and uncertain the local map will be.
• visible_map_decay_factor [double] [default=0.92]: Cells which were observed have an additional decay to better deal with moving obstacles. This allows a different forgetfulness for cells which are currently visible.
• wall_thickness [double] [default=0.20]: When integrating a flatscan an area of the given thickness behind a hit is marked as solid. This value should be at least in the order of the chosen cell size.
• clear_radius [int] [default=5]: A small rectangular area around the robot with this radius is always marked as free to prevent the robot from seeing itself. If this value is too big nearby obstacles might be ignored.
• flatscan_frame [string] [default="lidar"]: The name of the reference frame in which range scans arriving on the flatscan channel are defined.
• localmap_frame [string] [default="localmap"]: The name of the map coordinate frame. This will be used to write the pose of the map relative to the robot in the pose tree.
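
The decay behaviour of a single grid cell can be sketched as follows. This is an illustrative model only, not the Isaac SDK implementation; the mixing constant for new hits is an assumption.

```python
# Sketch (not Isaac SDK code): how one cell's occupancy value evolves under
# the map_decay_factor / visible_map_decay_factor scheme described above.

def update_cell(value, observed_hit, visible,
                map_decay_factor=0.98, visible_map_decay_factor=0.92):
    """Decay the previous cell value, then integrate the new measurement."""
    value *= map_decay_factor                # global "forgetting" of the map
    if visible:
        value *= visible_map_decay_factor    # extra decay for observed cells
    if observed_hit:
        value = min(1.0, value + 0.5)        # mixing constant is an assumption
    return value

# A cell that is visible but no longer hit fades out over ten ticks:
v = 1.0
for _ in range(10):
    v = update_cell(v, observed_hit=False, visible=True)
print(round(v, 3))
```

Lowering either decay factor makes the map more forgetful, so dynamic obstacles disappear faster at the cost of a noisier map.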

Description

Measures scan localization performance by evaluating tracked robot pose against ground truth

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)
Outgoing messages
(none)
Parameters
(none)

Description

A basic behavior which tries to keep the robot localized. It uses a global localizer to initially find the robot location. The result is given to a local localizer which tracks the robot pose over time. The planner is only run when the robot is localized.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• global_rmse_threshold [double] [default=1.0]: If the RMSE of the global localizer falls below this threshold it is assumed to be localized.
• global_min_progress [double] [default=0.75]: Minimum progress of the global localizer before we start considering the error threshold
• local_score_threshold [double] [default=0.0]: If the score of the local localizer falls below this threshold it is assumed to be lost.

Description

Selects a waypoint from a map and publishes it as a goal

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• desired_waypoint [GoalWaypointProto]: Receives the desired waypoint

Outgoing messages

• goal [Goal2Proto]: Output goal for the robot

Parameters

• map [string] [default="map"]: Map node for looking up waypoints
• waypoint [string] [default=""]: The waypoint which is published as the goal. If empty the current pose will be published.

Description

Simulates moving to a desired map waypoint. The status of the movement will be published as variables.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• desired_waypoint [GoalWaypointProto]: Receives the desired waypoint. An empty string as waypoint name will be interpreted as stop.
Outgoing messages
(none)

Parameters

• waypoint_map_layer [string] [default="map/waypoints"]: Map node for looking up waypoints. If the target waypoint is not inside this map layer the simulated motion will stop.
• average_distance [double] [default=5.0]: The average distance between waypoints
• max_speed [double] [default=1.0]: The maximum traveling speed of the agent

Description

Takes a set of ordered waypoint poses as an input plan. The orientation of each pose is used as starting point for the rotations. Expands the plan to include multiple orientations for each 2D position. This means that the robot can make one complete rotation for each 2D position, if it follows the plan in order.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• waypoints [Plan2Proto]: Input plan denoting waypoint poses.

Outgoing messages

• waypoints_with_orientations [Plan2Proto]: Output waypoint plan along with multiple angles of orientation

Parameters

• num_directions [int] [default=4]: Number of angles to turn the robot
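
The expansion described above can be sketched as follows. This is a hypothetical illustration of the stated behaviour (one full rotation per 2D position), not the SDK implementation; the function name and pose representation are assumptions.

```python
import math

# Sketch: duplicate each 2D waypoint pose with num_directions evenly spaced
# headings, starting from the pose's own orientation.

def expand_waypoints(waypoints, num_directions=4):
    """waypoints: list of (x, y, angle) tuples. Returns the expanded plan."""
    plan = []
    for x, y, angle in waypoints:
        for k in range(num_directions):
            plan.append((x, y, angle + k * 2.0 * math.pi / num_directions))
    return plan

plan = expand_waypoints([(0, 0, 0.0), (1, 2, 0.5)], num_directions=4)
print(len(plan))  # 8: four orientations for each of the two positions
```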

Description

A map layer for a navigation map. Holds various convenience functions for quick access of map data for navigation tasks. This layer can work with a multi-floor map which holds multiple layers of the same type for the different floors of a building. If multi-floor mode is enabled map layers are stored as prefix_n where prefix is the base name of the layer and n is the floor index. The first floor has the index 0, the second floor the index 1, etc. If multi-floor mode is disabled by setting num_floors to 0 only the prefix will be used to access the single layer of that type.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• num_floors [int] [default=0]: The number of floors in this map
• occupancy_grid_prefix [string] [default="occupancy"]: The name prefix used for occupancy grid map layers
• waypoint_prefix [string] [default="waypoints"]: The name prefix used for waypoint map layers
• restricted_area_prefix [string] [default="restricted_area"]: The name prefix used for keep clear area map layers
• global_localization_area_prefix [string] [default="localization_area"]: The name prefix used for global localization area map layers

Description

Collects the robot state (current pose, current speed and displacement since last update) at every tick. Publishes the state if the displacement is greater than a user-defined threshold.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• camera [ColorCameraProto]: Camera input. This is needed in order to publish the robot state with the acquisition time of the input image.

Outgoing messages

• robot_state [RobotStateProto]: Proto used to publish the robot's state (position, speed and displacement since last update)

Parameters

• tick_periodically [bool] [default=true]: Boolean to determine if we need to tick periodically. During periodic ticks, we can check displacement once every interval and publish the output with current time as acquisition time. If we tick on message instead, the output can be published with the acquisition time of the input message.
• angle_threshold [double] [default=DegToRad(15.0)]: Angle in radians that the robot needs to move before publishing
• distance_threshold [double] [default=0.5]: Distance in metres robot needs to move before publishing
• var_rx_speed_pos [string] [default=]: Linear speed as set by DifferentialBaseOdometry
• var_rx_speed_rot [string] [default=]: Angular speed as set by DifferentialBaseOdometry

Description

A component which holds a virtual representation of obstacles detected around the robot. Currently distance maps and spherical obstacles are available. This component is thread safe and can be accessed from other components without message passing.

Type: Component - This component does not tick and only provides certain helper functions.

Incoming messages
(none)
Outgoing messages
(none)
Parameters
(none)

Description

Converts an occupancy map into a binary map based on thresholds.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• occupancy_map [OccupancyMapProto]: Incoming occupancy map which will be converted and stored

Outgoing messages

• binary_map [BinaryMapProto]: Computed binary map

Parameters

• mean_threshold [int] [default=128]: Grid cells in the cost map which have a mean value greater than this threshold are considered to be blocked.
• stdandard_deviation_threshold [int] [default=128]: Grid cells in the cost map which have a standard deviation greater than this threshold are considered to be uncertain.
• is_optimistic [bool] [default=false]: If enabled uncertain cells will be treated as free, otherwise they are considered to be blocked.

Description

Localizes the robot in a given map based on a flat range scan. A Bayesian filter based on a particle filter is used to keep track of a multi-modal hypothesis distribution. For every tick the particle distribution is updated based on an ego motion estimate read from the pose tree. Particles are then evaluated against the measured range scan using a range scan model to compute new particle scores. Particles with the highest score are combined in a weighted average to compute the new best estimate of the robot pose. The robot pose is written into the pose tree as a result. Range scans are compared using a range scan model. In order for this node to work properly a component which is derived from RangeScanModel needs to be created and referenced in the parameter. Particles are initialized in the start function using an initial estimate of the robot pose which is read from the pose tree. The GridSearchLocalizer component can for example be used to provide this initial estimate. Alternatively the initial pose could also be provided using a PoseInitializer component.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• flatscan [FlatscanProto]: Incoming range scan used to localize the robot
Outgoing messages
(none)

Parameters

• num_particles [int] [default=75]: The number of particles used in the particle filter
• absolute_predict_sigma [Vector3d] [default=Vector3d(0.04, 0.04, DegToRad(5.0))]: Standard deviation of Gaussian noise "added" to the estimated pose during the predict step of the particle filter. This value is a rate per second and will be scaled by the time step. The used equation is of the form: current_position += Gaussian(0, sqrt(dt) * sigma); Note the use of sqrt(dt) for scaling the standard deviation which is required when summing up Normal distributions. The vector contains three parameters: 1) noise along the forward direction (X axis) 2) noise along the sideways direction (Y axis) 3) noise for the rotation
• relative_predict_sigma [Vector3d] [default=Vector3d(0.10, 0.10, 0.10)]: Standard deviation of Gaussian noise which is applied relative to the current speed of the robot and scaled by the timestep. The used equation is of the form: current_position += Gaussian(0, sqrt(dt) * current_speed * sigma); The vector contains three parameters as explained in absolute_predict_sigma.
• initial_sigma [Vector3d] [default=Vector3d(0.3, 0.3, DegToRad(20.0))]: Standard deviation of Gaussian noise which is applied to the initial pose estimate when the particle filter is (re-)seeded.
• output_best_percentile [double] [default=0.10]: The final pose estimate is computed using the average of the best particles. For example a value of 0.10 would mean that the top 10% of particles with highest scores are used to compute the final estimate.
• reseed_particles [bool] [default=false]: Set to true to request reseeding particles. This will be reset to false when the particle filter was reseeded.
• map [string] [default="map"]: Node of the map which contains map data. The map is used to compute which range scan would be expected from a hypothetical robot pose.
• range_scan_model [string] [default="shared_robot_model"]: Name of the node which contains a component of type RangeScanModel which is then used to compare range scans when evaluating particles against a new incoming message.
• flatscan_frame [string] [default="lidar"]: The name of the reference frame in which range scans arriving on the flatscan channel are defined.
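
The predict-step equations quoted in absolute_predict_sigma and relative_predict_sigma can be sketched like this. This is an illustrative reading of those formulas, not the SDK code; the way the two noise terms are combined per axis is an assumption.

```python
import math
import random

# Sketch of the particle-filter predict step:
#   current_position += Gaussian(0, sqrt(dt) * sigma_abs)          (absolute)
#   current_position += Gaussian(0, sqrt(dt) * speed * sigma_rel)  (relative)

def predict(pose, speed, dt, abs_sigma, rel_sigma, rng):
    """pose/speed/sigmas are 3-vectors: (x, y, rotation)."""
    out = []
    for p, v, sa, sr in zip(pose, speed, abs_sigma, rel_sigma):
        p += rng.gauss(0.0, math.sqrt(dt) * sa)           # absolute noise term
        p += rng.gauss(0.0, math.sqrt(dt) * abs(v) * sr)  # speed-relative term
        out.append(p)
    return out
```

The sqrt(dt) scaling keeps the accumulated variance proportional to elapsed time, so predicting in many small steps matches predicting in one large step.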

Description

An adaptive localization algorithm using a swarm of particles. A particle swarm algorithm is used to localize the robot based on a single flat range scan. The pose with the best match is written to the pose tree as a result. Consider using GridSearchLocalizer instead as it might provide a better particle-to-precision ratio.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• flatscan [FlatscanProto]: The current sensor measurement based on which we try to localize in the map
Outgoing messages
(none)

Parameters

• num_particles [int] [default=1000]: The number of particles used by PSO
• pso_omega [double] [default=0.5]: Omega parameter of PSO
• pso_phi [Vector3d] [default=(Vector3d{0.05, 0.05, 0.1})]: Phi parameter of PSO (values are for dx, dy, da)
• pso_phi_p_to_g [double] [default=1.0]: PSO parameter to express ratio between phi_p and phi_g
• map [string] [default="map"]: Map node to use for localization
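
For reference, a generic particle-swarm-optimization update step looks like the following. This is only a sketch of the standard PSO rule; the component's actual internals, including how pso_phi_p_to_g splits the attraction between the personal-best and global-best terms, are not shown in this documentation.

```python
import random

# Generic one-dimensional PSO step: pso_omega damps the old velocity, and
# the phi terms pull each particle toward its personal and the global best.

def pso_step(x, v, personal_best, global_best, omega=0.5, phi=0.1, rng=random):
    r_p, r_g = rng.random(), rng.random()  # stochastic attraction weights
    v = omega * v + phi * r_p * (personal_best - x) + phi * r_g * (global_best - x)
    return x + v, v
```

With the default damping the particle spirals in toward the best known pose rather than overshooting indefinitely.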

Description

The Patrol class selects a waypoint from a map and publishes it as a goal.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• goal [Goal2Proto]: Output goal for the robot

Parameters

• map [string] [default="map"]: Map node for looking up waypoints
• route [std::vector<std::string>] [default=]: Collection of waypoints to patrol between
• wait_time [double] [default=5.0]: Seconds to wait after arriving at a waypoint
• var_rx_has_arrived [string] [default=]: Variable to read to decide whether we arrived at target
• var_rx_is_stationary [string] [default=]: Variable to read to decide whether robot is stationary

Description

Selects a coordinate frame from the pose tree and publishes it as a goal

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• goal [Goal2Proto]: Output goal for the robot

Parameters

• goal_frame [string] [default=""]: Name of the goal coordinate frame
• reference_frame [string] [default="world"]: Name of the reference coordinate frame

Description

Divides given map into user-defined grid sizes. Reads the robot state (position, speed and the displacement since the last update) and determines which grid the robot was in at the time of acquisition of the state.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• robot_state [RobotStateProto]: Input robot state containing position, speed and the displacement since the last update

Outgoing messages

• heatmap [HeatmapProto]: Output HeatmapProto containing heatmap of probabilities, grid cell size and map frame

Parameters

• custom_cell_size [double] [default=2.0]: Desired size of each cell
• kernel_size [int] [default=9]: Size of the gaussian kernel to diffuse weights
• map [string] [default="map"]: Map node to use for localization
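
Mapping a robot position to a grid cell of size custom_cell_size can be sketched as below. This is an assumed illustration of the binning step, not the SDK implementation.

```python
import math

# Sketch: which cell of a custom_cell_size grid contains a given position?

def position_to_cell(x, y, cell_size=2.0):
    return (math.floor(x / cell_size), math.floor(y / cell_size))

print(position_to_cell(5.3, -1.2))  # (2, -1)
```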

Description

Picks random goals in a map for a robot to navigate to

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• feedback [Goal2FeedbackProto]: Feedback about our progress towards the goal

Outgoing messages

• goal [Goal2Proto]: Output goal for the robot

Parameters

• timeout [double] [default=10.0]: If the robot doesn’t move for this time period it will pick a new goal
• goal_position_threshold [double] [default=0.3]: Goal distance threshold sent to the planner
• robot_model [string] [default="shared_robot_model"]: The name of the robot model node which is used to find a valid goal
• robot_radius [double] [default=0.40]: The radius of the robot which is used to find a valid goal
• map [string] [default="map"]: Name of the map node to use for picking random goals

Description

Scan-to-scan matching model after Fox-Burgard-Thrun. Range scan models describe how well two range scans match with each other. The matching result is expressed as a similarity value in the range [0,1]. Similar range scans will result in a value close to one, while dissimilar range scans will give a value close to zero. Range scan models are for example used by scan localization components like the ParticleFilterLocalization or the GridSearchLocalizer. In order for these components to work properly you will have to create a range scan component inside a node and specify the corresponding configuration parameter for the localization components.

Type: Component - This component does not tick and only provides certain helper functions.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• noise_sigma [double] [default=0.25]: A parameter which defines the width of the Gaussian for range measurement noise
• unexpected_falloff [double] [default=0.10]: A parameter which defines the shape of the beam model for unexpected obstacles
• max_range [double] [default=100.0]: The maximum range. If the beam range is equal to this value it is considered out of range
• weights [Vector4d] [default=Vector4d(0.25, 0.25, 0.25, 0.11)]: Weights of the four contributions for the beam model in the following order: 0: measurement noise 1: unexpected obstacles 2: random measurement 3: max range
• smoothing [double] [default=0.01]: Smoothing factor for the overall shape function.
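
The classic Fox-Burgard-Thrun beam model combines the four weighted contributions listed above roughly as follows. This is a textbook-style sketch under assumed (unnormalized) component shapes; the SDK's exact formula is not shown in this documentation.

```python
import math

# Sketch of the four-component beam model: measurement noise, unexpected
# obstacles, random measurement, and max-range return, mixed by `weights`.

def beam_likelihood(measured, expected, noise_sigma=0.25,
                    unexpected_falloff=0.10, max_range=100.0,
                    weights=(0.25, 0.25, 0.25, 0.11)):
    p_hit = math.exp(-0.5 * ((measured - expected) / noise_sigma) ** 2)
    p_short = math.exp(-measured / unexpected_falloff) if measured < expected else 0.0
    p_rand = 1.0 / max_range
    p_max = 1.0 if measured >= max_range else 0.0
    w = weights
    return w[0] * p_hit + w[1] * p_short + w[2] * p_rand + w[3] * p_max
```

A beam that matches the expected range scores far higher than one that misses it, while a max-range return still gets its own (smaller) weight.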

Description

Fast scan-to-scan matching model similar to the Fox-Burgard-Thrun model. See comment for RangeScanModelClassic for an explanation on how range scan models are used.

Type: Component - This component does not tick and only provides certain helper functions.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• max_beam_error_far [double] [default=0.50]: Each beam for which the measured range is further away than the expected range can contribute at most this value to the total error.
• max_beam_error_near [double] [default=0.50]: Similar to max_beam_error_far but for the case when the measured range is closer than the expected range
• percentile [double] [default=0.9]: Specifies the percentile of ranges to use to compute a combined distance over multiple beams. Valid range ]0,1]. If set to 1 all ranges are taken. If set to lower than 1 only the given percentile of beams with the lowest error is taken.
• max_weight [double] [default=15.0]: The maximum weight which can be given to a beam. Beams are weighted linearly based on the average between measured and expected distance up to a maximum of this value.
• sharpness [double] [default=5.0]: The error returned by the distance function is transformed to unit range using the following function: p = exp(-sharpness * error/max_beam_error). If sharpness is zero the actual error will be returned.
• invalid_range_threshold [double] [default=0.5]: Beams with a range smaller than or equal to this distance are considered to have returned an invalid measurement.
• out_of_range_threshold [double] [default=100.0]: Beams with a range larger than or equal to this distance are considered to not have hit an obstacle within the maximum possible range of the sensor.
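
A hedged sketch of how these parameters could combine per-beam errors into a unit-range match score (the documentation only quotes the final transform p = exp(-sharpness * error/max_beam_error); the clamping, percentile selection and per-beam weighting details are assumptions here):

```python
import math

# Sketch: clamp each beam error, keep only the best `percentile` fraction,
# then map the mean error to [0, 1] with the quoted exponential transform.

def match_score(beam_errors, max_beam_error=0.5, percentile=0.9, sharpness=5.0):
    errors = sorted(min(e, max_beam_error) for e in beam_errors)
    kept = errors[:max(1, int(len(errors) * percentile))]
    mean_error = sum(kept) / len(kept)
    return math.exp(-sharpness * mean_error / max_beam_error)

print(round(match_score([0.0, 0.1, 0.05, 2.0]), 3))  # ≈ 0.607
```

The percentile cut discards the worst beams, which makes the score robust against outliers such as beams hitting unmapped dynamic obstacles.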

Description

The RobotRemoteControl class controls a robot via commands from a joystick or gamepad. This codelet can also be used as a deadman switch. The codelet can switch between commands received from the control stack via the ctrl channel and commands received from the joystick via js_state.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• js_state [JoystickStateProto]: Joystick state including information about which buttons are pressed
• ctrl [StateProto]: The command from our controller

Outgoing messages

• segway_cmd [StateProto]: The command sent to the segway

Parameters

• disable_deadman_switch [bool] [default=false]: Disables the deadman switch regardless of whether a joystick is connected
• differential_joystick [bool] [default=true]: If set to true this is using a differential control model. Otherwise a holonomic control model is used.
• manual_button [int] [default=4]: The ID for the button used to manually control the robot with the gamepad. When this button is pressed on the joystick, we enter manual mode where we read speed commands from joystick axes. For a PlayStation Dualshock 4 Wireless Controller, this button corresponds to ‘L1’.
• autonomous_button [int] [default=5]: The ID for the button used to allow the AI to control the output. When this button is pressed but the manual button is not pressed on the joystick, we enter autonomous mode where we read speed commands from the controller that is transmitting to our 'ctrl' channel here. For a PlayStation Dualshock 4 Wireless Controller, this button corresponds to 'R1'.
• move_axes [int] [default=0]: The axes used for translating the robot in manual mode. For a PlayStation Dualshock 4 Wireless Controller, these axes correspond to the 'left stick'.
• rotate_axes [int] [default=1]: The axis used for rotating the robot in manual mode. For a PlayStation Dualshock 4 Wireless Controller, this axis corresponds to the 'right stick'.
• linear_speed_max [double] [default=1.0]: The maximal allowed manual speed for linear movements.
• angular_speed_max [double] [default=0.8]: The maximal allowed manual speed for rotation.

Description

Visualizes the robot at its current pose

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• robot_mesh [string] [default="carter"]: Name of the robot asset used for display in Sight.

Description

Estimates a set of waypoints over the reachable locations of a given map and computes the shortest path through them. Returns a set of points ordered by the shortest path through them.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• waypoints [Plan2Proto]: Output plan, which is a list of poses that the robot can move to

Parameters

• max_distance_factor [double] [default=2.25]: Factor controlling the maximum distance between two points to be connected
• target_cell_size [double] [default=0.50]: The size of step we take to look for freespace and put waypoints
• random_waypoints [int] [default=200000]: Number of random waypoints that we can try and add to the graph
• map [string] [default="map"]: Name of the map in consideration

Description

Bridge for Virtual Gamepad: - Receives virtual controller state messages from Sight's Virtual Gamepad widget. - Uses bidirectional communication between backend and frontend. - Forwards the received controller messages to other C++ codelets (for example RobotRemoteControl) in the backend. - Sends relevant backend status info from the codelets to Sight at regular intervals.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• request [nlohmann::json]: Request to the Bridge

Outgoing messages

• reply [nlohmann::json]: Reply from the bridge to Sight
• joystick [JoystickStateProto]: TX proto for Gamepad State

Parameters

• sight_widget_connection_timeout [double] [default=30.0]: Sight Widget Connection Timeout in seconds
• num_virtual_buttons [int] [default=12]: Number of buttons for a simulated Virtual Joystick. Keeping default value consistent with packages/sensors/Joystick.hpp
• deadman_button [int] [default=4]: Button number for failsafe. Keeping consistent with packages/navigation/RobotRemoteControl.hpp

## isaac.perception.AprilTagsDetection¶

Description

AprilTagsDetection takes an image as input and detects and decodes any AprilTags found in the image. It returns an array of tag IDs as output.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• image [ColorCameraProto]: RGB input image. Image should be undistorted prior to being passed in here.

Outgoing messages

• april_tags [FiducialListProto]: Output list of AprilTag fiducials

Parameters

• max_tags [int] [default=50]: Maximum number of AprilTags that can be detected
• tag_dimensions [double] [default=0.18]: Tag dimensions, translation of tags will be calculated in same unit of measure
• tag_family [string] [default="tag36h11"]: Tag family, currently ONLY tag36h11 is supported

## isaac.perception.CropAndDownsample¶

Description

Codelet to crop and downsample the input image. The input image is first cropped to the desired region of interest and then resized to the desired output dimensions.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• input_image [ColorCameraProto]: Input image

Outgoing messages

• output_image [ColorCameraProto]: Cropped and resized output image

Parameters

• crop_start [Vector2i] [default=]: Top left corner (row, col) for crop
• crop_size [Vector2i] [default=]: Target dimensions (rows, cols) for crop.
• downsample_size [Vector2i] [default=]: Target dimensions (rows, cols) for downsample after crop.
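
The crop-then-resize order described above can be sketched as follows, using plain lists and nearest-neighbour sampling. This is an assumed illustration; the SDK's interpolation method may differ.

```python
# Sketch: crop the region of interest first, then resize it to the
# requested output dimensions with nearest-neighbour sampling.

def crop_and_downsample(image, crop_start, crop_size, downsample_size):
    r0, c0 = crop_start
    rows, cols = crop_size
    cropped = [row[c0:c0 + cols] for row in image[r0:r0 + rows]]
    out_rows, out_cols = downsample_size
    return [
        [cropped[r * rows // out_rows][c * cols // out_cols]
         for c in range(out_cols)]
        for r in range(out_rows)
    ]

image = [[10 * r + c for c in range(8)] for r in range(8)]
print(crop_and_downsample(image, (2, 2), (4, 4), (2, 2)))  # [[22, 24], [42, 44]]
```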

## isaac.perception.DepthImageFlattening¶

Description

The DepthImageFlattening class flattens a depth image into a 2D range scan.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• depth [DepthCameraProto]: Input depth image

Outgoing messages

• flatscan [FlatscanProto]: Output range scan

Parameters

• camera_frame [string] [default="camera"]: The name of the camera coordinate frame
• ground_frame [string] [default="ground"]: The name of the ground coordinate frame
• fov [double] [default=DegToRad(90.0)]: The field of view to use for the result range scan
• sector_delta [double] [default=DegToRad(0.5)]: Angular resolution of the result range scan
• min_distance [double] [default=0.2]: Minimum distance for the result range scan
• max_distance [double] [default=5.0]: Maximum distance for the result range scan
• range_delta [double] [default=0.10]: Range resolution of the result range scan
• cell_blocked_threshold [int] [default=10]: A sector in the result range scan is marked as blocked after the given number of points.
• height_min [double] [default=0.20]: Minimum height in ground coordinates at which a point is considered to be an obstacle
• height_max [double] [default=1.00]: Maximum height in ground coordinates at which a point is considered to be an obstacle
• skip_row [int] [default=0]: Number of pixels in row that are skipped while parsing the image
• skip_column [int] [default=0]: Number of pixels in column that are skipped while parsing the image
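
The flattening idea can be sketched as below: bucket obstacle points by azimuth sector and report a sector's nearest range once enough points support it. This is an assumed illustration of the parameters above, not the SDK implementation.

```python
import math

# Sketch: flatten ground-frame 3D points into per-sector ranges.

def flatten(points, fov=math.radians(90.0), sector_delta=math.radians(0.5),
            height_min=0.20, height_max=1.00, cell_blocked_threshold=10):
    counts = {}
    for x, y, z in points:
        if not (height_min <= z <= height_max):
            continue                         # outside the obstacle height slice
        azimuth = math.atan2(y, x)
        if abs(azimuth) > fov / 2:
            continue                         # outside the scan's field of view
        sector = int((azimuth + fov / 2) / sector_delta)
        counts.setdefault(sector, []).append(math.hypot(x, y))
    # a sector reports its nearest range once enough points fall into it
    return {s: min(rs) for s, rs in counts.items()
            if len(rs) >= cell_blocked_threshold}
```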

## isaac.perception.DepthImageToPointCloud¶

Description

Create a point cloud from a depth image. Every pixel is “unprojected” based on its depth and the camera model. The point cloud is transformed into the desired target frame using the given transformation cloud_T_camera.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• depth [DepthCameraProto]: Input depth image
• color [ColorCameraProto]: Input color image to color points (optional)

Outgoing messages

• cloud [PointCloudProto]: The computed point cloud

Parameters

• use_color [bool] [default=false]: If this is enabled a color image will be used to produce a colored point cloud. This can only be changed at program start.
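
The per-pixel "unprojection" mentioned above follows the standard pinhole model. A minimal sketch, assuming the usual intrinsics layout (focal lengths fx, fy and principal point cx, cy); the SDK reads these from the camera model in the message.

```python
# Sketch: map a pixel (u, v) with depth d to a 3D point in the camera frame.

def unproject(u, v, depth, fx, fy, cx, cy):
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return (x, y, depth)

print(unproject(320, 240, 2.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0))
# the principal point maps onto the optical axis: (0.0, 0.0, 2.0)
```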

## isaac.perception.DisparityToDepth¶

Description

Converts a disparity image to a depth image using the camera intrinsics and extrinsics

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• disparity [DepthCameraProto]: Input disparity image
• extrinsics [Pose3dProto]: camera pair extrinsics (right-to-left)

Outgoing messages

• depth [DepthCameraProto]: The converted depth image in meters
Parameters
(none)
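
The standard stereo relation behind this conversion is depth = focal_length * baseline / disparity. A minimal sketch, assuming the focal length (in pixels) comes from the intrinsics and the baseline (in meters) from the right-to-left extrinsics:

```python
# Sketch of the stereo disparity-to-depth relation (not the SDK code).

def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    if disparity_px <= 0.0:
        return float("inf")  # zero/negative disparity carries no valid depth
    return focal_length_px * baseline_m / disparity_px

print(disparity_to_depth(35.0, focal_length_px=700.0, baseline_m=0.12))
# 700 * 0.12 / 35 = 2.4 meters
```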

## isaac.perception.FiducialAsGoal¶

Description

Looks for a fiducial with a specific ID and uses it as a goal for the navigation stack. The goal can be computed relative to the fiducial based on different methods. 1) “center”: The center of the fiducial is projected into the Z=0 plane and published as the goal point for the navigation stack. 2) “pointing”: A ray is shot out of the center of the fiducial into the direction of the normal and intersected with the Z=0 ground plane. This happens up to a maximum distance of max_goal_tag_distance. 3) “offset”: The fixed offset fiducial_T_goal is used to compute the goal based on the detected fiducial. A goal or plan is published every time a fiducial detection is received. In case the fiducial is not found for longer than give_up_duration a stop command is sent.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• fiducials [FiducialListProto]: The input channel where fiducial detections are published

Outgoing messages

• goal [Goal2Proto]: The target fiducial as a goal
• plan [Plan2Proto]: The target fiducial as a simple plan with one waypoint

Parameters

• target_fiducial_id [string] [default="tag36h11_9"]: The ID of the target fiducial
• give_up_duration [double] [default=1.0]: If the robot does not see the fiducial for this time period the robot is stopped
• mode [Mode] [default=Mode::kCenter]: Specifies how the robot will use the fiducial to compute its goal location.
• max_goal_tag_distance [double] [default=1.0]: The maximum distance the goal will be away from the tag
• robot_frame [string] [default=]: The name of the robot coordinate frame
• camera_frame [string] [default=]: The name of the camera coordinate frame

## isaac.perception.FreespaceFromDepth¶

Description

The FreespaceFromDepth class flattens a depth image into a 2D range scan.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• depth [DepthCameraProto]: Input image used to compute the range scan

Outgoing messages

• flatscan [FlatscanProto]: Output the freespace as a range scan that can be used for example to produce a local map for navigation

Parameters

• last_range_cell_additional_contribution [double] [default=2.5]: In order to favor the last cell in case there is no obstacle, we arbitrarily increase the value by this factor scaled by the average occupancy.
• edge_distance_cost [double] [default=0.5]: Factor to compute the cost of an edge (multiplied by the distance). Reducing this value might increase processing time.
• max_edge_cost [double] [default=1.0]: Cap on the maximum cost of an edge (Reducing this value might speed up the processing time.)
• max_contribution_after_wall [double] [default=2.5]: Once we hit a wall, we cap the value of a cell at: max_contribution_after_wall * average_weight
• wall_threshold [double] [default=5.0]: The minimum value needed for a cell to be considered as a wall (as a factor of the average value.)
• fov [double] [default=DegToRad(90.0)]: The field of view to use for the result range scan
• num_sectors [int] [default=180]: Angular resolution of the result range scan
• range_delta [double] [default=0.1]: Range resolution of the result range scan
• height_min [double] [default=-1.00]: Minimum height in ground coordinates at which a point is considered valid
• height_max [double] [default=2.00]: Maximum height in ground coordinates at which a point is considered valid
• max_distance [double] [default=20.0]: Max range for the extraction.
• reduce_scale [int] [default=2]: Reduction factor for image. Values greater than one shrink the image by that amount
• integrate_temporal_information [bool] [default=false]: Whether to integrate temporal information across frames
• use_predicted_height [bool] [default=false]: Whether to use the predicted height (from measurement) or 0 when rendering the freespace
• camera_name [string] [default=]: Name of the camera used to get the camera position in the world

## isaac.perception.ImageUndistortion¶

Description

Apply geometric correction to an input image, publishing the undistorted result. The camera supplying the input image can use either a perspective or a fisheye lens, and this will remove radial distortion and project to an ideal perspective projection. If the lens is a perspective lens, tangential distortion can also be corrected. The input image is assumed to be in Image3ub format, and the resultant output image is delivered in Image3ub format.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• input_image [ColorCameraProto]: The input image and its optical parameters. The parameters include focal length, principal point, radial and tangential distortion parameters, and projection type (perspective or fisheye).

Outgoing messages

• output_image [ColorCameraProto]: The output image and its optical parameters. The output parameters are set to best match the source: the same focal length and principal point, but otherwise an undistorted ideal perspective image.

Parameters

• down_scale_factor [int] [default=4]: Scaling of the displayed images in Sight. down_scale_factor is the ratio of the size of the source image to the displayed image.
• gpu_id [int] [default=0]: The GPU device to be used for Warp360 CUDA operations. The default value of 0 suffices for cases where there is only one GPU, and is a good default when there is more than one GPU.
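
The radial correction mentioned above typically uses the polynomial model x_d = x * (1 + k1·r² + k2·r⁴) on normalized coordinates; undistortion inverts that mapping numerically. A pure-Python sketch of this idea — the coefficients and the fixed-point inversion are illustrative, not Isaac's implementation:

```python
# Sketch of the radial distortion model for a perspective lens, assuming the
# common polynomial form x_d = x * (1 + k1*r^2 + k2*r^4) on normalized
# coordinates. Coefficients and the inversion scheme are illustrative.

def distort(x, y, k1, k2):
    """Apply radial distortion to normalized image coordinates."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * f, y * f

def undistort(xd, yd, k1, k2, iterations=20):
    """Invert the distortion numerically by fixed-point iteration."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / f, yd / f
    return x, y

# Round trip: distort a point, then recover it.
xd, yd = distort(0.3, -0.2, k1=-0.1, k2=0.01)
xu, yu = undistort(xd, yd, k1=-0.1, k2=0.01)
```

For small distortion coefficients the fixed-point iteration converges rapidly, which is why a fixed small iteration count suffices in practice.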

## isaac.perception.RangeScanFlattening¶

Description

Flattens a 3D range scan into a 2D range scan. We assume that a range scan is made up of vertical “slices” of beams which are rotated around the lidar at specific azimuth angles. For each azimuth angle, all beams of the vertical slice are analysed and compared to a 2.5D world model to compute a single distance value for that azimuth angle. The pairs of azimuth angle and distance are published as a “flat” range scan. The 2.5D world model assumes that every location in the X/Y plane is either blocked or free. To compute this we assume a critical height slice relative to the lidar, defined by a minimum and maximum height. If any return beam of the vertical slice hits an obstacle in that height slice, the flat scan reports a hit. In addition to the height interval we also allow for a fudge on the pitch angle of the lidar, which acts as an additional rejection criterion: every beam return has to be inside the height slice not only for the beam angle alpha, but for all angles in the interval [alpha - fudge, alpha + fudge].

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• scan [RangeScanProto]: Incoming 3D range scan

Outgoing messages

• flatscan [FlatscanProto]: Outgoing “flat” range scan

Parameters

• use_target_pitch [bool] [default=false]: Enables usage of target pitch parameter
• target_pitch [double] [default=]: If this value is set only beams with this pitch angle will be used; otherwise all beams of a vertical beam slice will be used.
• height_min [double] [default=0.0]: Minimum relative height for accepting a return as a collision.
• height_max [double] [default=1.5]: Maximum relative height for accepting a return as a collision.
• pitch_fudge [double] [default=0.005]: Inaccuracy of vertical beam angle which can be used to compensate small inaccuracies of the lidar inclination angle.
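
The flattening rule above can be sketched for a single azimuth slice: keep the closest return whose height stays inside [height_min, height_max] across the whole fudged pitch interval. Function and variable names here are illustrative, not Isaac's API:

```python
import math

# Illustrative sketch of flattening one azimuth slice: a return is accepted
# only if its height is inside the critical height slice for every pitch in
# [pitch - fudge, pitch + fudge]; the closest accepted return wins.

def flatten_slice(beams, height_min, height_max, pitch_fudge=0.005):
    """beams: list of (pitch_angle, range). Returns flat distance or None."""
    best = None
    for pitch, distance in beams:
        # Height of the return relative to the lidar, checked at both
        # extremes of the fudged pitch interval.
        heights = (distance * math.sin(pitch - pitch_fudge),
                   distance * math.sin(pitch + pitch_fudge))
        if all(height_min <= h <= height_max for h in heights):
            best = distance if best is None else min(best, distance)
    return best

# A slice with one return inside the height band and one well above it:
flat = flatten_slice([(0.0, 4.0), (0.6, 5.0)], height_min=-0.5, height_max=1.0)
```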

## isaac.perception.RangeToPointCloud¶

Description

The RangeToPointCloud class converts a range scan into a point cloud.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• scan [RangeScanProto]: The range scan which is to be converted to a point cloud

Outgoing messages

• cloud [PointCloudProto]: The point cloud computed from the range scan

Parameters

• min_fov [double] [default=DegToRad(360.0)]: Minimum field of view to accumulate before sending out the message (in addition to min_count)
• min_count [int] [default=360]: Minimum number of points before sending a point cloud (in addition to min_fov)
• enable_visualization [bool] [default=false]: If set to true the point cloud is visualized with Sight
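
Each beam converts to a 3D point by the usual spherical-to-Cartesian mapping. A minimal sketch, assuming beams are given as (azimuth, elevation, range) — the convention is illustrative, not taken from the component's code:

```python
import math

# Sketch of converting a range-scan beam to a 3D point, assuming beams are
# (azimuth, elevation, range) in radians and meters. The axis convention
# (x forward, y left, z up) is an illustrative choice.

def beam_to_point(azimuth, elevation, distance):
    x = distance * math.cos(elevation) * math.cos(azimuth)
    y = distance * math.cos(elevation) * math.sin(azimuth)
    z = distance * math.sin(elevation)
    return (x, y, z)

# A beam straight ahead at 2 m lands on the x axis.
point = beam_to_point(azimuth=0.0, elevation=0.0, distance=2.0)
```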

## isaac.perception.ScanAccumulator¶

Description

Accumulates slices of range scans into a full range scan. This can, for example, be used to accumulate the small slices produced by a rotating lidar into a full 360-degree range scan.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• scan [RangeScanProto]: Proto used to subscribe to partial scan lidar data

Outgoing messages

• fullscan [RangeScanProto]: Proto used to publish full scan lidar data

Parameters

• min_fov [double] [default=DegToRad(360.0)]: Minimum FOV before sending out the message (in addition to min_slice_count)
• min_slice_count [int] [default=1800]: Number of slices to accumulate before sending out the message (in addition to min_fov)
• clock_wise_rotation [bool] [default=true]: Turning direction of the LIDAR
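
The accumulation rule — publish only once both the covered field of view and the slice count reach their thresholds — can be sketched as follows (class and method names are hypothetical):

```python
import math

# Sketch of the accumulation rule: a full scan is emitted only when both the
# accumulated field of view and the number of slices reach their thresholds.

class Accumulator:
    def __init__(self, min_fov=2 * math.pi, min_slice_count=1800):
        self.min_fov = min_fov
        self.min_slice_count = min_slice_count
        self.slices = []
        self.fov = 0.0

    def add(self, slice_fov, slice_data):
        """Returns the accumulated scan when thresholds are met, else None."""
        self.slices.append(slice_data)
        self.fov += slice_fov
        if self.fov >= self.min_fov and len(self.slices) >= self.min_slice_count:
            full, self.slices, self.fov = self.slices, [], 0.0
            return full
        return None

acc = Accumulator(min_fov=math.pi, min_slice_count=2)
first = acc.add(math.pi / 2, "slice0")   # thresholds not yet met
second = acc.add(math.pi / 2, "slice1")  # fov and count both reached
```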

## isaac.perception.StereoDisparityNet¶

Description

StereoDisparityNet takes a pair of left and right images as input and infers disparity using the NVStereoNet library. The network expects an input of 257 x 513. The network outputs disparities in left camera space.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• left [ColorCameraProto]: Left camera image
• right [ColorCameraProto]: Right camera image

Outgoing messages

• left_disparity [DepthCameraProto]: The inferred depth in meters

Parameters

• weights_file [string] [default=]: Path to the weights file
• plan_file [string] [default=]: Path to the plan file. The plan file is specific to the SM version of the GPU.
• fp16_mode [bool] [default=false]: Flag to turn on half precision for TensorRT. This is currently not supported on desktop GPUs and only works on TX2/Xavier.

## isaac.perception.StereoImageSplitting¶

Description

StereoImageSplitting splits a side-by-side stereo image into a left image and a right image. Input images are assumed to be all in Image3ub format.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• stereo [ColorCameraProto]: Input stereo image

Outgoing messages

• left [ColorCameraProto]: Output left image
• right [ColorCameraProto]: Output right image

Parameters

• copy_pinhole_from_source [bool] [default=true]: If true, the pinhole is copied from the source and the column count is adjusted to half the original column count.
• left_rows [int] [default=]: Number of pixels in the height dimension of left image
• left_cols [int] [default=]: Number of pixels in the width dimension of left image
• left_focal_length [Vector2d] [default=]: Focal length of the left image
• left_optical_center [Vector2d] [default=]: Optical center for the left image
• right_rows [int] [default=]: Number of pixels in the height dimension of right image
• right_cols [int] [default=]: Number of pixels in the width dimension of right image
• right_focal_length [Vector2d] [default=]: Focal length of the right image
• right_optical_center [Vector2d] [default=]: Optical center for the right image
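
When copy_pinhole_from_source is true, the split halves the column count, as described above. A minimal numpy sketch of the side-by-side split (function names are illustrative):

```python
import numpy as np

# Sketch of side-by-side stereo splitting: the left half of the columns
# becomes the left image and the right half the right image, so each output
# keeps half the original column count.

def split_stereo(stereo):
    """stereo: H x (2W) x 3 array -> (left, right), each H x W x 3."""
    cols = stereo.shape[1] // 2
    return stereo[:, :cols], stereo[:, cols:]

stereo = np.zeros((4, 8, 3), dtype=np.uint8)  # a 4 x 8 side-by-side frame
left, right = split_stereo(stereo)
```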

## isaac.planner.DifferentialBaseControl¶

Description

Controller node for a differential base. Takes a trajectory plan and outputs a Segway command.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• plan [DifferentialTrajectoryPlanProto]: Input: the plan to follow (contains a list of states at given timestamps)

Outgoing messages

• cmd [StateProto]: Output a navigation::DifferentialBaseControl state message.

Parameters

• cmd_delay [double] [default=0.2]: Expected delay between the command being sent and its execution (in seconds)
• use_pid_controller [bool] [default=true]: Whether or not to use the PID controller
• manual_mode_channel [string] [default=""]: Channel publishing whether or not the robot is in manual mode
• pid_heading [Vector7d] [default=Vector7d((double[]){1.0, 0.1, 0.0, 0.25, -0.25, 1.0, -1.0})]: Parameters of the PID controller that controls the heading error
• pid_pos_y [Vector7d] [default=Vector7d((double[]){1.0, 0.1, 0.0, 0.25, -0.25, 2.0, -2.0})]: Parameters of the PID controller that controls the lateral error
• pid_pos_x [Vector7d] [default=Vector7d((double[]){0.2, 0.05, 0.0, 0.1, -0.1, 2.0, -2.0})]: Parameters of the PID controller that controls the forward error
• controller_epsilon_gain [double] [default=0.5]: Gains used to compute the forward gain
• controller_b_gain [double] [default=0.5]: Gains used to compute the heading gain
• use_tick_time [bool] [default=true]: This flag controls whether this task uses the tick time or the acquisition time to know which command to output. Note: acquisition time should be used when the DifferentialTrajectoryPlanProto comes from a non-synchronized source. cmd_delay should be used to estimate the full delay from when the odometry was computed to when the command is executed on the system.
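
The pid_* defaults above are Vector7d values; a plausible reading of the seven entries is (kp, ki, kd, integral_max, integral_min, output_max, output_min). That layout is an assumption, not stated by this document. Under that assumption, one clamped PID step might look like:

```python
# Minimal clamped PID step, ASSUMING the Vector7d layout is
# (kp, ki, kd, integral_max, integral_min, output_max, output_min).
# This is a sketch, not Isaac's controller code.

def pid_step(error, prev_error, integral, params, dt=0.1):
    kp, ki, kd, i_max, i_min, out_max, out_min = params
    # Integrate the error and clamp the integral term (anti-windup).
    integral = min(max(integral + error * dt, i_min), i_max)
    # Standard PID output, clamped to the output limits.
    output = kp * error + ki * integral + kd * (error - prev_error) / dt
    return min(max(output, out_min), out_max), integral

pid_heading = (1.0, 0.1, 0.0, 0.25, -0.25, 1.0, -1.0)  # default from above
output, integral = pid_step(error=0.5, prev_error=0.4, integral=0.0,
                            params=pid_heading)
```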

## isaac.planner.DifferentialBaseLqrPlanner¶

Description

The DifferentialBaseLqrPlanner class computes and outputs the local plan given the position of the robot and its surroundings. The local plan is computed using the LQR planner. TODO(ben): Rename with Differential in the name

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• odometry [Odometry2Proto]: Contains the odometry information required for planning (current speed, acceleration, etc.)
• global_plan [Plan2Proto]: Contains the target plan the local planner attempts to follow

Outgoing messages

• plan [DifferentialTrajectoryPlanProto]: Contains a series of poses to form the trajectory that is optimal to follow

Parameters

• robot_model [string] [default="shared_robot_model"]: Name of the robot model node
• obstacle_world_component [string] [default="obstacle_world/obstacle_world"]: Name of the ObstacleWorld component
• time_between_command_ms [int] [default=100]: Step size to be used in integrating the state
• num_controls [int] [default=50]: Upper limit on the number of steps in the output trajectory plan
• target_distance [double] [default=0.25]: Distance we would like to keep away from surroundings (GridMapObstaclesLqr parameter)
• speed_gradient_target_distance [double] [default=1.0]: How fast the target distance increases depending on the speed
• min_distance [double] [default=0.1]: Distance we want to keep away from surroundings before incurring a high penalty
• speed_gradient_min_distance [double] [default=0.0]: How fast the minimum distance increases depending on the speed.
• gain_speed [double] [default=1.0]: Gain of a quadratic cost to penalize a speed outside the range defined below (DifferentialLqr parameter)
• gain_steering [double] [default=0.0]: Gain of a quadratic cost to penalize any steering
• gain_lat_acceleration [double] [default=0.2]: Gain of a quadratic cost to penalize the lateral acceleration
• gain_linear_acceleration [double] [default=4.0]: Gain of a quadratic cost to penalize the forward acceleration
• gain_angular_acceleration [double] [default=2.0]: Gain of a quadratic cost to penalize the angular acceleration
• gain_to_target [double] [default=0.1]: Gain of a custom cost to penalize the robot according to its distance to the target
• gain_to_end_position_x [double] [default=20.0]: Gain of a quadratic cost to penalize the last position in forward/backward direction relative to the target
• gain_to_end_position_y [double] [default=50.0]: Gain of a quadratic cost to penalize the last position in lateral direction relative to the target
• gain_to_end_angle [double] [default=1.0]: Gain of a quadratic cost to penalize the robot if its orientation does not match the target
• gain_to_end_speed [double] [default=10.0]: Gain of a quadratic cost to penalize the robot if it is still moving
• gain_to_end_angular_speed [double] [default=10.0]: Gain of a quadratic cost to penalize the robot if it is still rotating
• max_speed [double] [default=0.75]: Soft limit on how fast we would like to move
• min_speed [double] [default=-0.0]: Soft limit on how slow we are allowed to move
• distance_to_target_sigma [double] [default=1.0]: Parameter that controls the strength of the gradient depending on the distance to the target. The error cost is of the form d^2/(d^2 + s^2): it behaves as a quadratic cost close to the target and as a constant value far away from the target.
• decay [double] [default=1.01]: Decay applied to each step (decay < 1 gives higher importance to the beginning of the path, while decay > 1 emphasizes the end of the path).
• local_maps [std::vector<std::string>] [default={}]: List of local_maps to use for the planning. The ObstacleWorld is queried.
• use_predicted_position [bool] [default=true]: Indicates whether or not the predicted position or actual position is used while planning. If true, this produces a more stable path, however it relies on a good controller to keep the robot on track. If false, then this codelet also acts as a controller.
• max_predicted_position_error [double] [default=0.5]: The distance from the predicted position we tolerate. If we exceed this value, the actual robot position is used.
• manual_mode_channel [string] [default=""]: Channel publishing whether or not the robot is in manual mode
• print_debug [bool] [default=false]: Specifies whether to show extra information in Sight for debug purposes
• reuse_lqr_plan [bool] [default=true]: Specifies whether or not to use the previous plan as starting point for the lqr
• restart_planning_cycle [int] [default=10]: How frequently (in terms of ticks) we restart the planning from scratch: 0 disables restarting (never restart, unless reuse_lqr_plan is set to false); 1 never reuses the plan (regardless of the value of reuse_lqr_plan); 10 (assuming reuse_lqr_plan = true) means every 10 ticks we drop the previous plan and replan from a stopped position.
• static_frame [string] [default="world"]: Name of a frame which is static. This is used to compensate for the odometry drift.
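
The saturating target cost d^2/(d^2 + s^2) described for distance_to_target_sigma, together with the per-step decay weighting, can be sketched as follows (illustrative code, not the planner's implementation):

```python
# Sketch of the saturating target cost: quadratic near the target,
# approaching a constant far away. The decay weighting follows the
# description for the `decay` parameter.

def target_cost(distance, sigma):
    d2 = distance * distance
    return d2 / (d2 + sigma * sigma)

def weighted_costs(distances, sigma, decay):
    # decay > 1 emphasizes the end of the path, decay < 1 the beginning.
    return [target_cost(d, sigma) * decay ** i for i, d in enumerate(distances)]

near = target_cost(0.1, sigma=1.0)   # ~quadratic regime, small cost
far = target_cost(100.0, sigma=1.0)  # saturates just below 1.0
```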

## isaac.planner.DifferentialBaseModel¶

Description

Holder of common parameters describing the differential base (two independently controllable wheels defined by the wheel radius and the distance between wheels).

Type: Component - This component does not tick and only provides certain helper functions.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• robot_radius [double] [default=0.40]: The radius of the robot for collision detection.
• base_length [double] [default=0.63]: The distance between the two wheels
• wheel_radius [double] [default=0.2405]: The radius of the wheels
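
These parameters fix the standard differential-drive kinematics. A sketch of the textbook body-to-wheel speed conversion using the default base_length (wheel separation) and wheel_radius — this is the generic formula, not this component's code:

```python
# Standard differential-drive kinematics: each wheel's angular speed follows
# from the body's linear and angular speed, the wheel separation, and the
# wheel radius. Defaults taken from the parameters above.

def body_to_wheel_speeds(linear, angular, base_length=0.63, wheel_radius=0.2405):
    """Returns (left, right) wheel angular speeds in rad/s."""
    left = (linear - angular * base_length / 2.0) / wheel_radius
    right = (linear + angular * base_length / 2.0) / wheel_radius
    return left, right

# Driving straight: both wheels spin at the same rate.
left, right = body_to_wheel_speeds(linear=1.0, angular=0.0)
```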

## isaac.planner.DifferentialBaseStop¶

Description

Generates zero commands for a differential base.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)

Outgoing messages

• cmd [StateProto]: Output a navigation::DifferentialBaseControl command consisting of zero linear and angular speeds.
Parameters
(none)

## isaac.planner.GlobalPlanner¶

Description

Global planner: takes a target destination from the config and outputs a global plan from the current position to the target. Alternatively, the planner can also receive target destinations from other nodes through PROTO_RX.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• goal [Goal2Proto]: The target destination received

Outgoing messages

• plan [Plan2Proto]: The computed global plan

Parameters

• graph_initialization_steps [int] [default=20000]: How many random samples to use while pre-computing the graph.
• graph_in_tick_steps [int] [default=100]: How many random samples to use during each tick to increase the graph size.
• threshold [double] [default=0.2]: Map threshold to consider a cell as blocked
• local_optimization_look_ahead [double] [default=20.0]: How much of the path we want to optimize (set to 0 to not run it).
• robot_model [string] [default="shared_robot_model"]: Name of the robot model node
• map [string] [default="map"]: Map node used to check if paths are valid
• obstacle_world_component [string] [default="obstacle_world/obstacle_world"]: Name of the ObstacleWorld component
• local_maps [string] [default=""]: The name of the local_map to use for the planning. The ObstacleWorld will be queried.
• model_error_margin [double] [default=0.01]: How close to obstacle the robot can be (in meters).
• model_max_distance [double] [default=2.0]: Maximum distance between two points to be connected (in meters). A shorter distance produces a denser graph. In general a value in the order of the average distance of any point to the closest obstacle is recommended.
• model_min_increment [double] [default=0.2]: We interpolate along the path to check the validity; this value is the minimum jump used. Needs to be strictly greater than 0 to guarantee the function returns. A good value is ~2 * error_margin.
• model_local_cell_size [double] [default=0.1]: Pixel size of the local map
• model_local_max_distance [double] [default=10.0]: Distance outside the local map (it needs to be ~the size of the map)
• model_invalid_penalty [double] [default=5.0]: Cost of moving inside an invalid position (local map)
• max_colliding_lookup [double] [default=0.2]: How much distance into an obstacle we tolerate for the starting position and the target.
• opt_min_improvement [double] [default=0.0001]: If the improvement is below this value the gradient descent will stop.
• opt_obstacle_gain [double] [default=25.0]: Gain of the distance to obstacles
• opt_distance_gain_factor [double] [default=100.0]: Gain to keep waypoints close to each other
• opt_dist_waypoint_factor [double] [default=2.5]: Distance between created intermediate waypoints (in pixels)
• opt_dist_obstacle_factor [double] [default=3.0]: Target distance to obstacle (in robot radius)
• opt_max_number_waypoints [int] [default=250]: Maximum number of waypoints that will be created. If it is exceeded, optimize will return false.
• opt_line_search_iterations [int] [default=50]: How many iterations for each line search
• opt_line_search_decay [double] [default=0.5]: Line search decay
• opt_max_iterations [int] [default=200]: Max number of iterations
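
The model_min_increment parameter implies an interpolated edge-validity check: samples are taken along a candidate segment at steps no smaller than the increment, and each sample is tested against the map. A hedged sketch of that check — the names and the obstacle predicate are illustrative, not the planner's actual code:

```python
# Sketch of an interpolated edge-validity check: walk along the segment in
# steps bounded by the minimum increment and test each sample against a
# free-space predicate.

def edge_is_valid(a, b, is_free, min_increment=0.2):
    """a, b: 2D points; is_free: predicate returning True for free points."""
    length = ((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2) ** 0.5
    steps = max(1, int(length / min_increment))
    for i in range(steps + 1):
        t = i / steps
        p = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        if not is_free(p):
            return False
    return True

# Free space everywhere except a disk obstacle of radius 0.5 at (1, 0):
free = lambda p: (p[0] - 1.0) ** 2 + p[1] ** 2 > 0.25
blocked = edge_is_valid((0.0, 0.0), (2.0, 0.0), free)  # passes through disk
clear = edge_is_valid((0.0, 0.0), (0.0, 2.0), free)    # avoids the disk
```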

## isaac.planner.HolonomicBaseControl¶

Description

Controller node for a differential base. Takes a trajectory plan and outputs a Segway command.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• plan [DifferentialTrajectoryPlanProto]: Plan (position/time) the controller is trying to follow. TODO: Should not take a DifferentialTrajectoryPlanProto

Outgoing messages

• cmd [StateProto]: Output a navigation::DifferentialBaseControl state message.

Parameters

• cmd_delay [double] [default=0.2]: Expected delay between the command being sent and its execution (in seconds)
• use_pid_controller [bool] [default=true]: Whether or not to use the PID controller
• manual_mode_channel [string] [default=""]: Channel publishing whether or not the robot is in manual mode
• pid_heading [Vector7d] [default=Vector7d((double[]){1.0, 0.1, 0.0, 0.25, -0.25, 1.0, -1.0})]: Parameters of the PID controller that controls the heading error
• pid_pos_y [Vector7d] [default=Vector7d((double[]){1.0, 0.1, 0.0, 0.25, -0.25, 2.0, -2.0})]: Parameters of the PID controller that controls the lateral error
• pid_pos_x [Vector7d] [default=Vector7d((double[]){0.2, 0.05, 0.0, 0.1, -0.1, 2.0, -2.0})]: Parameters of the PID controller that controls the forward error
• use_tick_time [bool] [default=true]: This flag controls whether this task uses the tick time or the acquisition time to know which command to output. Note: acquisition time should be used when the DifferentialTrajectoryPlanProto comes from a non-synchronized source. cmd_delay should be used to estimate the full delay from when the odometry was computed to when the command is executed on the system.

## isaac.pwm.PwmController¶

Description

Interface for a PCA9685 PWM controller device. This device is used to send PWM signals to peripherals.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• set_duty_cycle [PwmChannelSetDutyCycleProto]: Sets a duty cycle for a PWM channel. Note: setting a PWM value for a channel automatically enables that channel. The duty cycle is a percentage, from 0.00 to 1.00.
• set_pulse_length [PwmChannelSetPulseLengthProto]: Sets a pulse length for a PWM channel. The pulse length is a percentage, from 0.00 to 1.00 of the cycle.
Outgoing messages
(none)

Parameters

• i2c_device_num [int] [default=0]: I2C device ID; matches /dev/i2c-X
• pwm_frequency_in_hertz [int] [default=50]: Defines the frequency at which the PWM outputs modulate, in hertz. 50 Hz is common for servos.
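
For context, the PCA9685 generates PWM with 12-bit (4096-tick) resolution, so a duty cycle in [0.00, 1.00] maps to a tick count roughly as below. This is one common conversion convention; register I/O is omitted entirely:

```python
# Sketch of converting a fractional duty cycle to a PCA9685 "off" tick count,
# assuming the common 12-bit (0..4095) convention. Not Isaac's driver code.

def duty_cycle_to_ticks(duty_cycle):
    duty_cycle = min(max(duty_cycle, 0.0), 1.0)  # clamp to [0, 1]
    return round(duty_cycle * 4095)

ticks = duty_cycle_to_ticks(0.25)      # quarter duty cycle
clamped = duty_cycle_to_ticks(1.5)     # out-of-range input is clamped
```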

## isaac.sight.AliceSight¶

Description

Interface for Sight. Provides a default implementation which does nothing.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)
Outgoing messages
(none)
Parameters
(none)

## isaac.sight.WebsightServer¶

Description

The WebsightServer class serves the frontend web visualization. Data is sent over a websocket using a predefined API.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• port [int] [default=3000]: Port for the communication between web server and Sight
• webroot [string] [default="packages/sight/webroot"]: Path to the files needed for Sight
• assetroot [string] [default="~/isaac-lfs/sight/assets"]: Path to assets such as pictures
• bandwidth [int] [default=10000000]: Bandwidth to limit the rate of data transfer
• use_compression [bool] [default=false]: Whether to compress data for transfer
• ui_config [json] [default=(nlohmann::json{{"windows", {}}})]: Configuration for the user interface (UI)

## isaac.stereo_depth.CoarseToFineStereoDepth¶

Description

CoarseToFineStereoDepth takes a pair of left and right images as input and infers depth using the NVStereoMatcher library. It utilizes CUDA to speed up the computation of depth by running it on the GPU. This codelet also takes in the extrinsics of the camera pair and outputs depth as perceived by the left camera. The NVStereoMatcher library uses RGBA buffers, so RGB images are copied into RGBA buffers before running depth estimation.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• left_image [ColorCameraProto]: RGB input left image. Images should be rectified and undistorted prior to being passed in.
• right_image [ColorCameraProto]: RGB input right image

Outgoing messages

• left_depth_image [DepthCameraProto]: The inferred depth in meters (from the view of the left camera).

Parameters

• baseline [double] [default=0.12]: Default baseline for the stereo camera (in meters) if no extrinsics are provided
• min_depth [double] [default=0.0]: Minimum depth of the scene (in meters)
• max_depth [double] [default=20.0]: Maximum depth of the scene (in meters)
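
Depth from stereo follows the standard relation depth = focal_length × baseline / disparity (focal length in pixels, baseline in meters, disparity in pixels). A sketch using the default 0.12 m baseline; the focal length here is an illustrative value, not a component parameter:

```python
# Standard stereo triangulation: depth = f * B / d. The focal length value
# below is illustrative; the 0.12 m baseline is the default from above.

def disparity_to_depth(disparity_px, focal_px, baseline_m=0.12):
    if disparity_px <= 0:
        return float("inf")  # no match / infinitely far
    return focal_px * baseline_m / disparity_px

depth = disparity_to_depth(disparity_px=24.0, focal_px=700.0)
```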

## isaac.utils.FlatscanToPointCloud¶

Description

Converts a flatscan to a 3D point cloud. This is useful to run point-cloud-based algorithms on flatscans, for example for scan-to-scan matching. In many cases, however, much more efficient algorithms could be written for the two-dimensional case of a flatscan.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• flatscan [FlatscanProto]: Input flatscan

Outgoing messages

• cloud [PointCloudProto]: Output 3D point cloud
Parameters
(none)
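
The conversion itself is planar: each flatscan beam (angle, range) maps to a 3D point with z = 0. A minimal sketch (names illustrative):

```python
import math

# Sketch of flatscan-to-point-cloud conversion: beams live in the scan
# plane, so the resulting points all have z = 0.

def flatscan_to_points(beams):
    """beams: list of (angle, range) -> list of (x, y, z) points."""
    return [(r * math.cos(a), r * math.sin(a), 0.0) for a, r in beams]

points = flatscan_to_points([(0.0, 1.5), (math.pi / 2, 2.0)])
```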

## isaac.viewers.ColorCameraViewer¶

Description

Visualizes a color camera image in Sight. This is useful to limit the bandwidth used for visualization.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• color_listener [ColorCameraProto]: 8-bit RGB color camera to visualize
Outgoing messages
(none)

Parameters

• target_fps [double] [default=30.0]: Maximum framerate at which images are displayed in Sight.
• reduce_scale [int] [default=1]: Reduction factor for image; values greater than one will shrink the image by that factor.
• camera_name [string] [default=""]: Frame of the camera (to get the position from the PoseTree)

## isaac.viewers.DepthCameraViewer¶

Description

DepthCameraViewer visualizes a depth camera image in Sight. This is useful to limit the bandwidth used for visualization.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• depth_listener [DepthCameraProto]: 32-bit float depth image to visualize
Outgoing messages
(none)

Parameters

• target_fps [double] [default=30.0]: Maximum framerate at which images are displayed in sight
• reduce_scale [int] [default=1]: Reduction factor for image, values greater than one will shrink the image by that factor
• min_visualization_depth [double] [default=0.0]: Minimum depth in meters used in color grading the depth image for visualization
• max_visualization_depth [double] [default=32.0]: Maximum depth in meters used in color grading the depth image for visualization
• colormap [std::vector<Vector3i>] [default=]: A color gradient used for depth visualization. The min_visualization_depth gets mapped to the first color, the max gets mapped to last color. Everything else in between gets interpolated.
• camera_name [string] [default=""]: Name of the camera used to get the camera pose from the pose tree (optional)
• enable_depth_point_cloud [bool] [default=false]: Enable depth point cloud visualization, can slow down sight if too many points are being drawn
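
The color grading described for the colormap parameter — mapping min_visualization_depth to the first color, max to the last, and interpolating in between — can be sketched as follows. The gradient colors here are arbitrary examples, not Isaac defaults:

```python
# Sketch of depth color grading: normalize depth between the visualization
# bounds, then linearly interpolate along a color gradient.

def grade_depth(depth, colormap, min_depth=0.0, max_depth=32.0):
    t = (depth - min_depth) / (max_depth - min_depth)
    t = min(max(t, 0.0), 1.0)                 # clamp to [0, 1]
    pos = t * (len(colormap) - 1)             # position along the gradient
    i = min(int(pos), len(colormap) - 2)      # lower color index
    f = pos - i                               # interpolation fraction
    c0, c1 = colormap[i], colormap[i + 1]
    return tuple(round(a + f * (b - a)) for a, b in zip(c0, c1))

gradient = [(0, 0, 0), (255, 0, 0), (255, 255, 255)]  # example colors
near = grade_depth(0.0, gradient)
mid = grade_depth(16.0, gradient)
far = grade_depth(32.0, gradient)
```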

## isaac.viewers.DetectionsViewer¶

Description

This codelet shows detections in an image

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• detections [Detections2Proto]: Bounding box in pixel coordinates and class label of objects in an image
Outgoing messages
(none)

Parameters

• reduce_scale [int] [default=1]: Reduction factor for bounding boxes, values greater than one will shrink the box by that amount. Should match the factor of the image being drawn upon.

## isaac.viewers.MosaicViewer¶

Description

Peeks at the sample tensors from the SampleAccumulator component, grabs one tensor from each sample at the specified index, and visualizes them in Sight as a single mosaic image. The SampleAccumulator component must reside in the same node as the MosaicViewer.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages
(none)
Outgoing messages
(none)

Parameters

• grid_size [Vector2i] [default=]: Numbers of cells on each axis
• mosaic_size [Vector2i] [default=]: Dimensions of final image
• sample_tensor_index [int] [default=0]: The index of tensor to visualize in each sample (TensorList)

## isaac.viewers.PointCloudViewer¶

Description

Visualizes a point cloud in sight. This component is useful to limit the overall bandwidth when displaying a point cloud.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• cloud [PointCloudProto]: The point cloud which will be visualized in sight.
Outgoing messages
(none)

Parameters

• target_fps [double] [default=10.0]: Maximum framerate at which images are displayed in sight.
• skip [int] [default=11]: If set to a value greater than 1, points will be skipped. For example, skip = 2 skips half of the points. Use this value to limit the number of points visualized in Sight.
• max_distance [double] [default=5.0]: Points which have a depth (z-component) greater than this value will be skipped
• frame [string] [default=]: The coordinate frame in which the point cloud is visualized.

## isaac.viewers.SegmentationCameraViewer¶

Description

Class that receives segmentation camera information from the simulator.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• segmentation_listener [SegmentationCameraProto]: The segmentation_listener object receives 8-bit class, 16-bit instance, and class label (string, int pair) information from a SegmentationCameraProto message
Outgoing messages
(none)

Parameters

• target_fps [double] [default=30.0]: Target FPS used to show images to sight, decrease to reduce overall bandwidth needed
• reduce_scale [int] [default=1]: Reduction factor for image, values greater than one will shrink the image by that amount
• camera_name [string] [default=]: Frame of the camera (to get the position from the PoseTree)

## isaac.viewers.SegmentationViewer¶

Description

Visualizes a pixel-wise segmentation on top of a camera image. This component supports synchronization and transparency overlay.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• color [ColorCameraProto]: The original camera image
• segmentation [SegmentationCameraProto]: Pixel-wise image segmentation which is overlayed on top of the camera image
Outgoing messages
(none)

Parameters

• max_fps [double] [default=20.0]: Maximum FPS for showing images in Sight, which can be used to reduce overall bandwidth
• reduce_scale [int] [default=2]: Reduction factor for image, values greater than one will shrink the image by that amount
• highlight_label [int] [default=0]: The label which will be overlayed on top of the color image.
• highlight_color [Pixel3ub] [default=Pixel3ub(255, 255, 255)]: The color which is used to overlay the label.
• opacity [double] [default=0.5]: Opacity of the overlayed labels (0.0: fully transparent, 1.0: fully opaque)
• camera_name [string] [default=]: Frame of the camera (to get the position from the PoseTree)
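
The overlay amounts to per-pixel alpha blending wherever the segmentation matches highlight_label. A numpy sketch with illustrative names:

```python
import numpy as np

# Sketch of the label overlay: pixels whose segmentation label matches the
# highlight label are blended with the highlight color at the given opacity.

def overlay(color, labels, highlight_label, highlight_color, opacity=0.5):
    out = color.astype(np.float32)
    mask = labels == highlight_label
    out[mask] = (1.0 - opacity) * out[mask] + opacity * np.array(
        highlight_color, dtype=np.float32)
    return out.astype(np.uint8)

color = np.full((2, 2, 3), 100, dtype=np.uint8)
labels = np.array([[0, 1], [1, 0]])
result = overlay(color, labels, highlight_label=1,
                 highlight_color=(255, 255, 255), opacity=0.5)
```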

## isaac.viewers.TensorListViewer¶

Description

Takes a TensorListProto as input and visualizes one specified tensor in it:

• dimensions of 1 are ignored, e.g., a 1x1x8 tensor is considered rank 1;
• only float tensors are supported; they are colorized with StarryNightColorGradient;
• a rank 1 tensor can be visualized as a 1-by-N image, or as a rectangular image with a specified width;
• a rank 2 tensor is visualized as a rectangular image;
• a rank 3 tensor is visualized as stitched slices, with the first dimension used as slice indices;
• tensors of rank 4 or higher are not supported.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• input_tensors [TensorListProto]: Receives TensorList
Outgoing messages
(none)

Parameters

• tensor_index [int] [default=0]: The index of tensor to visualize in each message (TensorList)
• resize_cols [int] [default=0]: Number of columns after resizing rank-1 tensor. 0 means no resizing.
• max_value [float] [default=1.0f]: Values above this maximum are saturated
• min_value [float] [default=0.0f]: Values below this minimum are cut off
• render_size [Vector2i] [default=Vector2i::Zero()]: Enlarges/shrinks the image for rendering
• render_png [bool] [default=false]: Renders tensor image as PNG if true, otherwise renders as JPG
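
The min_value/max_value clamping described above can be sketched as follows (illustrative, not the viewer's code):

```python
# Sketch of value normalization for visualization: values are cut off below
# min_value, saturated above max_value, and mapped into [0, 1].

def normalize(value, min_value=0.0, max_value=1.0):
    value = min(max(value, min_value), max_value)
    return (value - min_value) / (max_value - min_value)

low = normalize(-0.5)   # cut off at the minimum
high = normalize(2.0)   # saturated at the maximum
mid = normalize(0.25)
```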

## isaac.yolo.YoloTensorRTInference¶

Description

Yolo TensorRT Inference loads a network trained in darknet format and optimizes the network using TensorRT for a given configuration. The optimized model is then loaded to run inference on the RGB image.

Type: Codelet - This component ticks either periodically or when it receives messages.

Incoming messages

• rgb_image [ColorCameraProto]: Input image

Outgoing messages

• output_detection_tensors [TensorListProto]: Output tensor list from Yolo TensorRT inference. proto[0] contains the bounding box parameters: {{bounding_box1{x1, y1, x2, y2}, objectness, {probability0, probability1, … probability<N>}}, …, {bounding_box<K>{x1, y1, x2, y2}, objectness, {probability0, probability1, … probability<N>}}}, where N is the number of classes the network is trained on, K is the number of bounding boxes predicted, and each bounding box holds the minimum and maximum (x, y) coordinates. proto[1] contains the network config parameters: {network_width, network_height, image_width, image_height, number of classes trained on, number of parameters for each bounding box (excluding class probabilities)}.

Parameters

• yolo_config_json [json] [default=nlohmann::json({})]: Yolo config json
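
The output layout described for output_detection_tensors suggests a per-box stride of 4 coordinates, 1 objectness score, and N class probabilities. A hedged sketch of unpacking a flat list under that interpretation — this is a reading of the description, not the actual proto API:

```python
# Sketch of unpacking the flat detection layout described above, ASSUMING
# each box contributes (x1, y1, x2, y2), objectness, then N class
# probabilities, laid out contiguously.

def parse_detections(flat, num_classes):
    stride = 4 + 1 + num_classes
    boxes = []
    for i in range(0, len(flat), stride):
        chunk = flat[i:i + stride]
        boxes.append({
            "box": tuple(chunk[0:4]),   # (x1, y1, x2, y2)
            "objectness": chunk[4],
            "probabilities": chunk[5:],  # one entry per class
        })
    return boxes

# One box, two classes:
flat = [0.0, 0.0, 10.0, 10.0, 0.9, 0.7, 0.3]
dets = parse_detections(flat, num_classes=2)
```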