The following section lists all components available in Isaac SDK. For each component, the incoming and outgoing message channels and their corresponding message types are listed, along with all parameters, their types, and their default values.
The following table gives an overview of all components. The columns ‘# Incoming’, ‘# Outgoing’ and ‘# Parameters’ indicate how many incoming message channels, outgoing message channels, and parameters the corresponding component has.
Namespace | Name | # Incoming | # Outgoing | # Parameters |
---|---|---|---|---|
isaac | AdafruitNeoPixelLedStrip | 1 | 0 | 1 |
isaac | ArgusCsiCamera | 0 | 1 | 5 |
isaac | ImageComparison | 2 | 0 | 2 |
isaac | Joystick | 0 | 1 | 7 |
isaac | LivoxLidar | 0 | 1 | 4 |
isaac | PanTiltDriver | 1 | 2 | 14 |
isaac | RealsenseCamera | 0 | 2 | 11 |
isaac | SegwayRmpDriver | 1 | 1 | 5 |
isaac | SerialBMI160 | 0 | 1 | 1 |
isaac | SimpleLed | 0 | 1 | 0 |
isaac | SlackBot | 1 | 1 | 2 |
isaac | StereoVisualOdometry | 5 | 1 | 5 |
isaac | TorchInferenceTestDisplayOutput | 1 | 0 | 0 |
isaac | TorchInferenceTestSendInput | 0 | 1 | 1 |
isaac | V4L2Camera | 0 | 1 | 16 |
isaac | Vicon | 0 | 2 | 3 |
isaac | ZedCamera | 0 | 5 | 15 |
isaac.alice | ChannelMonitor | 0 | 0 | 2 |
isaac.alice | Config | 0 | 0 | 0 |
isaac.alice | Failsafe | 0 | 0 | 1 |
isaac.alice | FailsafeHeartbeat | 0 | 0 | 3 |
isaac.alice | JsonToProto | 0 | 1 | 0 |
isaac.alice | MessageLedger | 0 | 0 | 1 |
isaac.alice | Pose | 0 | 0 | 0 |
isaac.alice | Pose2Comparer | 0 | 0 | 5 |
isaac.alice | PoseInitializer | 0 | 0 | 8 |
isaac.alice | PoseMessageInjector | 1 | 0 | 0 |
isaac.alice | PoseToMessage | 0 | 1 | 2 |
isaac.alice | PoseTree | 0 | 0 | 0 |
isaac.alice | ProtoToJson | 0 | 1 | 0 |
isaac.alice | PyCodelet | 0 | 0 | 1 |
isaac.alice | Random | 0 | 0 | 2 |
isaac.alice | Recorder | 0 | 0 | 3 |
isaac.alice | Replay | 0 | 0 | 4 |
isaac.alice | ReplayBridge | 1 | 1 | 1 |
isaac.alice | Scheduling | 0 | 0 | 4 |
isaac.alice | Sight | 0 | 0 | 0 |
isaac.alice | Subgraph | 0 | 0 | 0 |
isaac.alice | Subprocess | 0 | 0 | 2 |
isaac.alice | TcpPublisher | 0 | 0 | 1 |
isaac.alice | TcpSubscriber | 0 | 0 | 4 |
isaac.alice | Throttle | 0 | 0 | 6 |
isaac.alice | TimeOffset | 0 | 0 | 3 |
isaac.alice | TimeSynchronizer | 0 | 0 | 0 |
isaac.audio | AudioCapture | 0 | 1 | 5 |
isaac.audio | AudioEnergyCalculation | 1 | 1 | 2 |
isaac.audio | AudioFileLoader | 1 | 1 | 3 |
isaac.audio | AudioPlayback | 1 | 0 | 1 |
isaac.audio | SaveAudioToFile | 1 | 0 | 2 |
isaac.audio | SoundSourceLocalization | 1 | 1 | 4 |
isaac.audio | TensorToAudioDecoder | 1 | 1 | 2 |
isaac.audio | TextToMel | 1 | 1 | 1 |
isaac.audio | VoiceCommandConstruction | 1 | 1 | 7 |
isaac.audio | VoiceCommandFeatureExtraction | 1 | 1 | 7 |
isaac.deepstream | Pipeline | 0 | 0 | 1 |
isaac.detect_net | DetectNetDecoder | 2 | 1 | 6 |
isaac.dynamixel | DynamixelDriver | 1 | 1 | 10 |
isaac.flatsim | DifferentialBasePhysics | 1 | 1 | 6 |
isaac.flatsim | DifferentialBaseSimulator | 2 | 2 | 7 |
isaac.flatsim | FlatscanNoiser | 1 | 1 | 7 |
isaac.flatsim | HolonomicBaseSimulator | 2 | 2 | 13 |
isaac.flatsim | SimRangeScan | 0 | 1 | 7 |
isaac.fuzzy | EfllFuzzyEngineExample | 0 | 0 | 0 |
isaac.fuzzy | LfllFuzzyEngineExample | 0 | 0 | 0 |
isaac.gtc_china | PanTiltGoto | 1 | 1 | 3 |
isaac.hgmm | HgmmPointCloudMatching | 1 | 0 | 10 |
isaac.imu | IioBmi160 | 0 | 1 | 2 |
isaac.imu | ImuCalibration2D | 1 | 0 | 3 |
isaac.imu | ImuCorrector | 1 | 1 | 3 |
isaac.imu | ImuSim | 1 | 1 | 7 |
isaac.json | JsonMockup | 0 | 1 | 1 |
isaac.json | JsonReplay | 0 | 1 | 1 |
isaac.json | JsonWriter | 2 | 0 | 2 |
isaac.kaya | KayaBaseDriver | 2 | 2 | 4 |
isaac.kinova_jaco | KinovaJaco | 2 | 4 | 2 |
isaac.lidar_slam | Cartographer | 1 | 0 | 6 |
isaac.lidar_slam | GMapping | 2 | 0 | 14 |
isaac.map | Map | 0 | 0 | 2 |
isaac.map | MapBridge | 1 | 1 | 0 |
isaac.map | ObstacleAtlas | 0 | 0 | 1 |
isaac.map | OccupancyGridMapLayer | 0 | 0 | 3 |
isaac.map | PolygonMapLayer | 0 | 0 | 5 |
isaac.map | Spline | 0 | 1 | 1 |
isaac.map | WaypointMapLayer | 0 | 0 | 1 |
isaac.message_generators | CameraGenerator | 0 | 3 | 2 |
isaac.message_generators | ConfusionMatrixGenerator | 0 | 1 | 0 |
isaac.message_generators | Detections2Generator | 0 | 1 | 1 |
isaac.message_generators | DifferentialBaseControlGenerator | 0 | 1 | 2 |
isaac.message_generators | DifferentialBaseStateGenerator | 0 | 1 | 4 |
isaac.message_generators | FlatscanGenerator | 0 | 1 | 6 |
isaac.message_generators | HolonomicBaseControlGenerator | 0 | 1 | 3 |
isaac.message_generators | ImageLoader | 0 | 2 | 12 |
isaac.message_generators | LatticeGenerator | 0 | 1 | 5 |
isaac.message_generators | PanTiltStateGenerator | 0 | 1 | 8 |
isaac.message_generators | Plan2Generator | 0 | 1 | 5 |
isaac.message_generators | PointCloudGenerator | 0 | 1 | 5 |
isaac.message_generators | PoseGenerator | 0 | 0 | 4 |
isaac.message_generators | RangeScanGenerator | 0 | 1 | 11 |
isaac.message_generators | RigidBody3GroupGenerator | 0 | 1 | 7 |
isaac.message_generators | TensorGenerator | 0 | 1 | 2 |
isaac.message_generators | TrajectoryListGenerator | 0 | 1 | 4 |
isaac.ml | ColorCameraEncoderCpu | 1 | 1 | 4 |
isaac.ml | ColorCameraEncoderCuda | 1 | 1 | 5 |
isaac.ml | ConfusionMatrixAggregator | 1 | 1 | 1 |
isaac.ml | Detection3Encoder | 1 | 1 | 1 |
isaac.ml | DetectionComparer | 2 | 1 | 2 |
isaac.ml | DetectionEncoder | 1 | 1 | 2 |
isaac.ml | DetectionImageExtraction | 2 | 1 | 3 |
isaac.ml | Detections3Comparer | 2 | 1 | 0 |
isaac.ml | EvaluateSegmentation | 2 | 0 | 0 |
isaac.ml | FilterDetectionsByLabel | 1 | 1 | 2 |
isaac.ml | GenerateKittiDataset | 2 | 0 | 3 |
isaac.ml | HeatmapDecoder | 1 | 1 | 2 |
isaac.ml | HeatmapEncoder | 1 | 1 | 0 |
isaac.ml | LabelToBoundingBox | 1 | 1 | 2 |
isaac.ml | ResizeDetections | 1 | 1 | 2 |
isaac.ml | RigidbodyToDetections3 | 1 | 1 | 1 |
isaac.ml | SampleAccumulator | 0 | 0 | 3 |
isaac.ml | SegmentationComparer | 2 | 1 | 2 |
isaac.ml | SegmentationDecoder | 1 | 1 | 1 |
isaac.ml | SegmentationEncoder | 1 | 1 | 7 |
isaac.ml | Teleportation | 1 | 2 | 26 |
isaac.ml | TensorArgMax | 1 | 1 | 2 |
isaac.ml | TensorChannelSum | 1 | 1 | 2 |
isaac.ml | TensorRTInference | 0 | 0 | 13 |
isaac.ml | TensorReshape | 1 | 1 | 1 |
isaac.ml | TensorflowInference | 0 | 0 | 4 |
isaac.ml | TorchInference | 0 | 0 | 4 |
isaac.ml | WaitUntilDetection | 1 | 1 | 2 |
isaac.ml | YoloDecoder | 2 | 1 | 3 |
isaac.navigation | BinaryToDistanceMap | 2 | 1 | 5 |
isaac.navigation | CollisionMonitor | 1 | 1 | 3 |
isaac.navigation | DetectionsToAtlas | 1 | 0 | 1 |
isaac.navigation | DifferentialBaseOdometry | 1 | 1 | 6 |
isaac.navigation | DifferentialBaseWheelImuOdometry | 2 | 1 | 9 |
isaac.navigation | DistanceMap | 0 | 0 | 1 |
isaac.navigation | FollowPath | 2 | 1 | 7 |
isaac.navigation | GoToBehavior | 1 | 0 | 0 |
isaac.navigation | GoalMonitor | 2 | 1 | 3 |
isaac.navigation | GoalToPlan | 1 | 1 | 0 |
isaac.navigation | GotoWaypointBehavior | 1 | 0 | 2 |
isaac.navigation | GradientDescentLocalization | 1 | 0 | 1 |
isaac.navigation | GridSearchLocalizer | 1 | 0 | 8 |
isaac.navigation | HolonomicBaseWheelImuOdometry | 2 | 1 | 8 |
isaac.navigation | LocalMap | 2 | 2 | 7 |
isaac.navigation | LocalizationEvaluation | 0 | 0 | 0 |
isaac.navigation | LocalizationMonitor | 1 | 0 | 8 |
isaac.navigation | LocalizeBehavior | 0 | 0 | 8 |
isaac.navigation | MapWaypointAsGoal | 1 | 1 | 2 |
isaac.navigation | MapWaypointAsGoalSimulator | 1 | 0 | 3 |
isaac.navigation | MapWaypointsAsPlan | 0 | 1 | 3 |
isaac.navigation | MoveAndScan | 1 | 1 | 1 |
isaac.navigation | MoveUntilArrival | 2 | 0 | 3 |
isaac.navigation | NavigationMap | 0 | 0 | 4 |
isaac.navigation | NavigationMonitor | 1 | 1 | 5 |
isaac.navigation | OccupancyMapCleanup | 2 | 1 | 3 |
isaac.navigation | OccupancyToBinaryMap | 2 | 1 | 3 |
isaac.navigation | ParticleFilterLocalization | 2 | 1 | 10 |
isaac.navigation | ParticleSwarmLocalization | 1 | 0 | 5 |
isaac.navigation | PoseAsGoal | 0 | 1 | 4 |
isaac.navigation | PoseHeatmapGenerator | 1 | 1 | 4 |
isaac.navigation | RandomMapPoseSampler | 0 | 0 | 2 |
isaac.navigation | RandomWalk | 1 | 1 | 2 |
isaac.navigation | RangeScanModelClassic | 0 | 0 | 5 |
isaac.navigation | RangeScanModelFlatloc | 0 | 0 | 7 |
isaac.navigation | RangeScanToObservationMap | 1 | 2 | 6 |
isaac.navigation | RobotPoseGenerator | 0 | 0 | 4 |
isaac.navigation | RobotRemoteControl | 2 | 1 | 8 |
isaac.navigation | RobotViewer | 1 | 0 | 8 |
isaac.navigation | TravellingSalesman | 0 | 1 | 5 |
isaac.navigation | VirtualGamepadBridge | 1 | 2 | 3 |
isaac.navsim | ScenarioManager | 1 | 2 | 6 |
isaac.navsim | ScenarioMonitor | 4 | 1 | 9 |
isaac.object_pose_estimation | CodebookLookup | 1 | 2 | 2 |
isaac.object_pose_estimation | CodebookPoseSampler | 0 | 1 | 12 |
isaac.object_pose_estimation | CodebookWriter | 2 | 1 | 0 |
isaac.object_pose_estimation | ImagePoseEncoder | 3 | 1 | 0 |
isaac.object_pose_estimation | PoseEstimation | 4 | 1 | 0 |
isaac.orb | ExtractAndVisualizeOrb | 1 | 0 | 5 |
isaac.perception | AprilTagsDetection | 1 | 1 | 3 |
isaac.perception | BirdViewProjection | 3 | 2 | 1 |
isaac.perception | CropAndDownsample | 1 | 1 | 4 |
isaac.perception | CropAndDownsampleCuda | 1 | 1 | 3 |
isaac.perception | DisparityToDepth | 2 | 1 | 0 |
isaac.perception | FiducialAsGoal | 1 | 2 | 6 |
isaac.perception | ImageWarp | 1 | 1 | 5 |
isaac.perception | PointCloudAccumulator | 1 | 1 | 1 |
isaac.perception | RangeScanFlattening | 1 | 1 | 5 |
isaac.perception | RangeToPointCloud | 1 | 1 | 4 |
isaac.perception | ScanAccumulator | 1 | 1 | 3 |
isaac.perception | StereoDisparityNet | 2 | 1 | 3 |
isaac.perception | StereoImageSplitting | 1 | 2 | 9 |
isaac.planner | DifferentialBaseControl | 1 | 1 | 9 |
isaac.planner | DifferentialBaseLqrPlanner | 2 | 1 | 34 |
isaac.planner | DifferentialBaseModel | 0 | 0 | 3 |
isaac.planner | GlobalPlanSmoother | 1 | 1 | 9 |
isaac.planner | GlobalPlanner | 2 | 1 | 16 |
isaac.planner | HolonomicBaseControl | 1 | 1 | 7 |
isaac.planner | SphericalRobotShapeComponent | 0 | 0 | 2 |
isaac.pwm | PwmController | 2 | 0 | 2 |
isaac.rgbd_processing | DepthEdges | 1 | 1 | 4 |
isaac.rgbd_processing | DepthImageFlattening | 1 | 1 | 12 |
isaac.rgbd_processing | DepthImageToPointCloud | 2 | 1 | 1 |
isaac.rgbd_processing | DepthNormals | 2 | 1 | 2 |
isaac.rgbd_processing | DepthPoints | 1 | 1 | 1 |
isaac.rgbd_processing | FreespaceFromDepth | 1 | 1 | 15 |
isaac.rl | TemporalBatching | 1 | 2 | 3 |
isaac.ros_bridge | CameraImageToRos | 0 | 0 | 1 |
isaac.ros_bridge | CameraInfoToRos | 0 | 0 | 1 |
isaac.ros_bridge | FlatscanToRos | 0 | 0 | 1 |
isaac.ros_bridge | GoalToRos | 0 | 0 | 2 |
isaac.ros_bridge | GoalToRosAction | 2 | 1 | 5 |
isaac.ros_bridge | OdometryToRos | 0 | 0 | 2 |
isaac.ros_bridge | PosesToRos | 0 | 0 | 2 |
isaac.ros_bridge | RosNode | 0 | 0 | 1 |
isaac.ros_bridge | RosToDifferentialBaseCommand | 0 | 0 | 0 |
isaac.ros_bridge | RosToPoses | 0 | 0 | 2 |
isaac.sight | AliceSight | 0 | 0 | 0 |
isaac.sight | SightWidget | 0 | 0 | 6 |
isaac.sight | WebsightServer | 0 | 0 | 6 |
isaac.skeleton_pose_estimation | OpenPoseDecoder | 3 | 1 | 14 |
isaac.stereo_depth | CoarseToFineStereoDepth | 2 | 1 | 3 |
isaac.superpixels | RgbdSuperpixelCostMap | 2 | 2 | 6 |
isaac.superpixels | RgbdSuperpixels | 5 | 1 | 18 |
isaac.superpixels | SuperpixelImageLabeling | 2 | 1 | 1 |
isaac.utils | DetectionUnprojection | 2 | 1 | 3 |
isaac.utils | DetectionsToPoseTree | 1 | 0 | 2 |
isaac.utils | DifferentialTrajectoryToPlanConverter | 1 | 1 | 1 |
isaac.utils | FlatscanToPointCloud | 1 | 1 | 0 |
isaac.utils | Plan2Converter | 1 | 1 | 1 |
isaac.utils | Pose2GaussianDistributionEstimation | 1 | 1 | 2 |
isaac.utils | PoseMonitor | 0 | 1 | 2 |
isaac.utils | PoseTreeFeed | 0 | 1 | 0 |
isaac.utils | RigidBodiesToDetections | 1 | 1 | 1 |
isaac.utils | SendTextMessages | 0 | 1 | 2 |
isaac.utils | WaitUntilDetection | 1 | 0 | 1 |
isaac.velodyne_lidar | VelodyneLidar | 0 | 1 | 3 |
isaac.viewers | BinaryMapViewer | 2 | 0 | 2 |
isaac.viewers | ColorCameraViewer | 1 | 0 | 4 |
isaac.viewers | DepthCameraViewer | 1 | 0 | 7 |
isaac.viewers | Detections3Viewer | 1 | 0 | 6 |
isaac.viewers | DetectionsViewer | 1 | 0 | 8 |
isaac.viewers | FiducialsViewer | 1 | 0 | 2 |
isaac.viewers | FlatscanViewer | 1 | 0 | 4 |
isaac.viewers | GoalViewer | 1 | 0 | 2 |
isaac.viewers | MosaicViewer | 0 | 0 | 4 |
isaac.viewers | OccupancyMapViewer | 2 | 0 | 1 |
isaac.viewers | PointCloudViewer | 1 | 0 | 4 |
isaac.viewers | SegmentationCameraViewer | 1 | 0 | 3 |
isaac.viewers | SegmentationViewer | 2 | 0 | 6 |
isaac.viewers | SkeletonViewer | 1 | 0 | 2 |
isaac.viewers | TensorViewer | 1 | 1 | 6 |
isaac.viewers | TrajectoryListViewer | 1 | 0 | 1 |
isaac.ydlidar | YdLidar | 0 | 1 | 1 |
isaac.yolo | YoloTensorRTInference | 1 | 2 | 1 |
isaac.zed | ZedImuReader | 0 | 2 | 2 |
isaac.AdafruitNeoPixelLedStrip
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- led_strip [LedStripProto]: The desired LED strip configuration message
Outgoing messages
(none)
Parameters
- bus [int] [default=1]: The I2C bus of the LED strip. Default value is 1
isaac.ArgusCsiCamera
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
(none)
Outgoing messages
- image [ColorCameraProto]: Channel to broadcast images extracted from the Argus feed
Parameters
- mode [int32_t] [default=]: Resolution mode of the camera. Supported values are: 0: 2592 x 1944, 1: 2592 x 1458, 2: 1280 x 720
- camera_id [int32_t] [default=]: System device numeral for the camera. For example, select 0 for /dev/video0.
- framerate [int32_t] [default=]: Desired framerate
- focal_length [Vector2d] [default=]: Focal length of the camera in pixels
- optical_center [Vector2d] [default=]: Optical center in pixels
isaac.ImageComparison
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_image_a [ColorCameraProto]: First input image
- input_image_b [ColorCameraProto]: Second input image
Outgoing messages
(none)
Parameters
- correlation_threshold [float] [default=0.99]: The minimum correlation between two images for them to be considered the same
- down_scale_factor [int] [default=4]: Scaling of the displayed images in Sight
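The correlation check above can be sketched as follows. This is an illustrative Python snippet, not the component's actual implementation; the function name `images_match` is hypothetical:

```python
import numpy as np

def images_match(image_a, image_b, correlation_threshold=0.99):
    # Flatten both images and compute the Pearson correlation coefficient.
    a = np.asarray(image_a, dtype=np.float64).ravel()
    b = np.asarray(image_b, dtype=np.float64).ravel()
    correlation = np.corrcoef(a, b)[0, 1]
    # Images are considered "the same" if the correlation reaches the threshold.
    return correlation >= correlation_threshold
```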
isaac.Joystick
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
(none)
Outgoing messages
- js_state [JoystickStateProto]: The joystick message
Parameters
- deadzone [double] [default=0.05]: Size of the "deadzone" region, applied to both positive and negative values for each axis. For example, a deadzone of 0.05 will result in joystick readings in the range [-0.05, 0.05] being clamped to zero. Readings outside of this range are rescaled to fully cover [-1, 1]. In other words, the range [0.05, 1] is linearly mapped to [0, 1], and likewise for negative values.
- num_axes [int] [default=4]: Number of joystick axes (e.g., 4 axes might correspond to two 2-axis analogue sticks)
- num_buttons [int] [default=12]: Number of joystick buttons
- reconnect_interval [double] [default=1.0]: Reconnect interval, in seconds. This is the period between joystick connection attempts (i.e., attempts to open the joystick device file) when the initial attempt fails.
- input_timeout_interval [double] [default=0.1]: Input timeout interval, in seconds. This determines how long tick() will wait for input before giving up until tick() is called again. Note that stop() cannot succeed while tick() is waiting for input, so this timeout value should not be overly long.
- device [string] [default="/dev/input/js0"]: Joystick device file (system-dependent)
- print_unsupported_buttons_warning [bool] [default=false]: Option controlling whether a warning will be logged when an event is received from an axis or button whose index exceeds num_axes or num_buttons, respectively
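The deadzone mapping described above can be sketched as follows (hypothetical helper for illustration, not part of the SDK API):

```python
def apply_deadzone(reading, deadzone=0.05):
    # Readings within [-deadzone, deadzone] are clamped to zero.
    if abs(reading) <= deadzone:
        return 0.0
    # The remaining range is linearly rescaled so that
    # [deadzone, 1] maps to [0, 1] (and likewise for negative values).
    sign = 1.0 if reading > 0 else -1.0
    return sign * (abs(reading) - deadzone) / (1.0 - deadzone)
```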
isaac.LivoxLidar
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
(none)
Outgoing messages
- accumulated_point_cloud [PointCloudProto]: Output 3D point cloud samples. The point cloud is published when the point count reaches the configured minimum point count or when the time between message publishing is greater than the configured published interval.
Parameters
- device_ip [string] [default="0.0.0.0"]: The IP address of the lidar device we want to connect to and receive data from. This parameter is changeable at configuration time.
- port_command [int] [default=50001]: The UDP port to send commands to the lidar. This parameter is changeable at configuration time.
- port_data [int] [default=50002]: The UDP port from which the data samples will be received. This parameter is changeable at configuration time.
- batch_count [int] [default=10]: Minimum number of accumulated point batches before publishing the point cloud. It can be configured and changed at runtime. The point cloud is published when the point count reaches the configured point batch count. Each batch is 100 data points per Livox communication protocol.
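The batch accumulation logic above can be sketched as follows (illustrative Python; the class name and interface are hypothetical, and the time-based publishing trigger is omitted for brevity):

```python
class LivoxAccumulator:
    POINTS_PER_BATCH = 100  # each batch is 100 data points per the Livox protocol

    def __init__(self, batch_count=10):
        # Publish once batch_count batches worth of points have accumulated.
        self.min_points = batch_count * self.POINTS_PER_BATCH
        self.points = []

    def add_batch(self, batch):
        self.points.extend(batch)
        if len(self.points) >= self.min_points:
            cloud, self.points = self.points, []
            return cloud  # ready to publish as a point cloud message
        return None  # keep accumulating
```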
isaac.PanTiltDriver
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- command [StateProto]: Current command for pan/tilt unit
Outgoing messages
- state [StateProto]: The state of the pan tilt unit
- motors [DynamixelMotorsProto]: State of Dynamixel motors
Parameters
- use_speed_control [bool] [default=true]: If set to true, Dynamixels are controlled in speed mode; otherwise they are controlled in position mode
- usb_port [string] [default="/dev/ttyUSB0"]: USB port used to connect to the bus (A2D2 USB adapter)
- pan_servo_id [int] [default=1]: Dynamixel ID for pan servo
- tilt_servo_id [int] [default=2]: Dynamixel ID for tilt servo
- tilt_min [double] [default=-0.5]: Minimum value valid for tilt
- tilt_max [double] [default=2.0]: Maximum value valid for tilt
- pan_min [double] [default=-Pi<double>]: Minimum value valid for pan
- pan_max [double] [default=Pi<double>]: Maximum value valid for pan
- pan_offset [double] [default=5.83]: Constant offset in the pan angle such that pan = 0 has the end effector looking forward (X axis)
- tilt_offset [double] [default=4.73]: Constant offset in the tilt angle such that tilt = 0 has the end effector looking horizontally
- baudrate [int] [default=static_cast<int>(dynamixel::Baudrate::k1M)]: Baudrate of the Dynamixel bus. See packages/dynamixel/gems/registers.hpp for options. TODO Remove when refactored to use DynamixelDriver class
- model [int] [default=static_cast<int>(dynamixel::Model::XM430)]: What kind of dynamixel model it is: (AX12A = 0, XM430 = 1, MX12W = 2) TODO(jberling) refactor pan tilt to use DynamixelDriver class and switch to enum
- pan_joint_frame [string] [default="pan"]: Name of the pan joint frame. The edge pan_in_T_pan_out will be added to the PoseTree.
- tilt_joint_frame [string] [default="tilt"]: Name of the tilt joint frame. The edge tilt_in_T_tilt_out will be added to the PoseTree.
isaac.RealsenseCamera
Description
RealsenseCamera is an Isaac codelet for the Realsense D435 camera that provides color and depth images. The sensor can also provide raw IR images; however, this is currently not supported.
You can change the resolution of the camera via various configuration parameters, but only the following modes are supported:
- 1280x720 (at most 30 Hz)
- 848x480
- 640x480
- 640x360
- 424x240
Valid framerates for the color image are 60, 30, 15, and 6 fps. Valid framerates for the depth image are 90, 60, 30, 15, and 6 fps. The camera can also produce images at 1080p resolution; however, this is currently not supported because color and depth are set to the same resolution.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
(none)
Outgoing messages
- color [ColorCameraProto]: A color camera image, which can be Image3ub (for color) or Image1ui16 (for grayscale)
- depth [DepthCameraProto]: Depth image (in meters). This is in the left IR camera frame
Parameters
- rows [int] [default=360]: The vertical resolution for both color and depth image.
- cols [int] [default=640]: The horizontal resolution for both color and depth image.
- rgb_framerate [int] [default=30]: The framerate of color image acquisition.
- depth_framerate [int] [default=30]: The framerate of depth image acquisition.
- align_to_color [bool] [default=true]: If enabled, the depth image is spatially aligned to the color image to provide matching color and depth values for every pixel. This is a CPU-intensive process and can reduce frame rates.
- frame_queue_size [int] [default=2]: Maximum number of frames that can be held at a given time. Increasing this number reduces frame drops but increases latency, and vice versa; ranges from 0 to 32.
- auto_exposure_priority [bool] [default=false]: Limit exposure time when auto-exposure is ON to preserve constant fps rate.
- laser_power [int] [default=150]: Amount of power used by the depth laser, in mW. Valid ranges are between 0 and 360, in increments of 30.
- enable_auto_exposure [bool] [default=true]: Enable auto exposure, disabling can reduce motion blur
- dev_index [int] [default=0]: The index of the Realsense device in the list of devices detected. This indexing is dependent on the order the Realsense library detects the cameras, and may vary based on mounting order. By default the first camera device in the list is chosen. This camera choice can be overridden by the serial number parameter below.
- serial_number [string] [default=""]: An alternative way to specify the desired device in a multi-camera setup. The serial number of the Realsense camera can be found printed on the device. If specified, this parameter takes precedence over the dev_index parameter above.
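A configuration fragment selecting one of the supported modes might look like the following; the node name "camera" is illustrative, and only parameters listed above are used:

```json
{
  "camera": {
    "isaac.RealsenseCamera": {
      "rows": 720,
      "cols": 1280,
      "rgb_framerate": 30,
      "depth_framerate": 30,
      "align_to_color": true
    }
  }
}
```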
isaac.SegwayRmpDriver
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- segway_cmd [StateProto]: Linear and angular speed command for driving segway (navigation::DifferentialBaseControl type)
Outgoing messages
- segway_state [StateProto]: State of the segway consisting of linear and angular speeds and accelerations (DifferentialBaseDynamics)
Parameters
- ip [string] [default="192.168.0.40"]: Isaac will use this IP to talk to the Segway
- port [int] [default=8080]: Isaac will use this port to talk to segway
- flip_orientation [bool] [default=true]: If true, segway’s forward direction will be flipped
- speed_limit_linear [double] [default=1.1]: Maximum linear speed segway is allowed to travel with
- speed_limit_angular [double] [default=1.0]: Maximum angular speed segway is allowed to rotate with
isaac.SerialBMI160
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
(none)
Outgoing messages
- imu [ImuProto]: IMU data including linear accelerations and angular velocities
Parameters
- device [string] [default="/dev/ttyUSB0"]: Device path for the IMU device
isaac.SimpleLed
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
(none)
Outgoing messages
- led_strip [LedStripProto]: The outgoing LED strip message for the driver to display
Parameters
(none)
isaac.SlackBot
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- slack_message [ChatMessageProto]: Messages to be sent to the slack server
Outgoing messages
- user_instruction [ChatMessageProto]: Messages received from the slack server
Parameters
- bot_token [string] [default=]: Slack bot token given on the Slack app configuration page. A token can only be used by one Slackbot; multiple robots on the same token are not supported.
- slack_connect_url [string] [default="https://slack.com/api/rtm.connect"]: Slack URL to which the connection request is sent
isaac.StereoVisualOdometry
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- left [ColorCameraProto]: Gray input left image. Images should be rectified before being passed in.
- right [ColorCameraProto]: Gray input right image
- extrinsics [Pose3dProto]: Camera pair extrinsics
- imu [ImuProto]: IMU readings
- imu_T_left_camera [Pose3dProto]: IMU-to-left-camera transformation. It contains the rotation and translation between the IMU and left camera frames.
Outgoing messages
- left_camera_pose [Pose3dProto]: The 6 DOF pose of the left camera. The pose is not published if the tracker is lost.
Parameters
- denoise_input_images [bool] [default=false]: Enable image denoising. Disable if the input images have already passed through a denoising filter.
- horizontal_stereo_camera [bool] [default=true]: Enable fast and robust left-to-right tracking for rectified cameras with principal points on the horizontal line.
- process_imu_readings [bool] [default=true]: Enable IMU data acquisition and integration
- num_points [int] [default=100]: Number of points to include in the pose trail debug visualization
- gravitational_force [Vector3d] [default=Vector3d(0.0, -9.80665, 0.0)]: The gravitational force vector
isaac.TorchInferenceTestDisplayOutput
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- test_output [TensorProto]: Receives tensor output from the Torch inference
Outgoing messages
(none)
Parameters
(none)
isaac.TorchInferenceTestSendInput
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
(none)
Outgoing messages
- test_input [TensorProto]: Sends tensors as input to Torch inference
Parameters
- input_value [float] [default=]:
isaac.V4L2Camera
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
(none)
Outgoing messages
- frame [ColorCameraProto]: Each frame output by the camera
Parameters
- device_id [int32_t] [default=0]: Which camera should be opened
- rows [int32_t] [default=720]: Number of pixels in the height dimension. The requested image parameters must match exactly what the camera is able to produce.
- cols [int32_t] [default=1280]: Number of pixels in the width dimension
- rate_hz [int32_t] [default=30]: Frames per second.
- hardware_image_queue_length [int32_t] [default=3]: Buffers are queued with the V4L2 driver so that the driver can write out images at the specified frame rate without delays. This may be changed by the camera when we are initializing.
- focal_length [Vector2d] [default=(Vector2d{700.0, 700.0})]: Focal length (in pixels) for the pinhole camera model
- optical_center [Vector2d] [default=(Vector2d{360.0, 640.0})]: Optical center of the projection for the pinhole camera model
- brightness [int32_t] [default=]: Picture brightness or, more precisely, the black level. Adjustable camera parameter values can be checked with v4l2-ctl, e.g., "v4l2-ctl --device=/dev/video0 --list-ctrls". Descriptions below are taken from the video4linux API documentation.
- contrast [int32_t] [default=]: Picture contrast or luma gain
- saturation [int32_t] [default=]: Picture color saturation or chroma gain
- gain [int32_t] [default=]: Gain control
- white_balance_temperature_auto [bool] [default=]: If true, the white balance temperature will be automatically adjusted.
- white_balance_temperature [int32_t] [default=]: This control specifies the white balance setting as a color temperature in Kelvin. The white balance temperature must be between 2000 and 6500. This parameter is inactive if white_balance_temperature_auto is true.
- exposure_auto [int32_t] [default=]: Exposure time and/or iris aperture. 0: Automatic exposure time, automatic iris aperture. 1: Manual exposure time, manual iris. 2: Manual exposure time, auto iris. 3: Auto exposure time, manual iris.
- exposure_absolute [int32_t] [default=]: Determines the exposure time of the camera sensor. The exposure time is limited by the frame interval. Drivers should interpret the values as 100 µs units, where the value 1 stands for 1/10000th of a second, 10000 for 1 second, and 100000 for 10 seconds.
- use_cuda_color_conversion [bool] [default=true]: Whether to convert from YUYV to RGB using CUDA; otherwise the CPU is used for the conversion.
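The exposure_absolute unit convention above amounts to a simple conversion (hypothetical helper for illustration):

```python
def v4l2_exposure_to_seconds(value):
    # exposure_absolute is in 100 microsecond units:
    # 1 -> 1/10000 s, 10000 -> 1 s, 100000 -> 10 s.
    return value * 100e-6
```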
isaac.Vicon
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
(none)
Outgoing messages
- vicon_pose_tree [PoseTreeProto]: Pose tree message containing information from Vicon scene volume
- vicon_markers [MarkerListProto]: Marker list message containing all markers visible in Vicon scene volume
Parameters
- vicon_hostname [string] [default="localhost"]: Hostname of the Vicon system
- vicon_port [string] [default="801"]: Port on which the Vicon data is streamed
- reconnect_interval [double] [default=1.0]: Amount of time to wait before attempting to reconnect to the Vicon system
isaac.ZedCamera
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
(none)
Outgoing messages
- left_camera_rgb [ColorCameraProto]: left rgb image and camera intrinsics
- right_camera_rgb [ColorCameraProto]: right rgb image and camera intrinsics
- left_camera_gray [ColorCameraProto]: left gray image and camera intrinsics
- right_camera_gray [ColorCameraProto]: right gray image and camera intrinsics
- extrinsics [Pose3dProto]: camera pair extrinsics (right-to-left)
Parameters
- auto_white_balance [bool] [default=true]: Automatic white balance control
- brightness [int] [default=4]: Brightness level. Valid values are between 0 and 8.
- resolution [sl::RESOLUTION] [default=sl::RESOLUTION_VGA]: The resolution to use for the ZED camera. The following values can be set:
- RESOLUTION_HD2K: 2208x1242
- RESOLUTION_HD1080: 1920x1080
- RESOLUTION_HD720: 1280x720
- RESOLUTION_VGA: 672x376
- camera_fps [int] [default=60]: The image frame rate for the ZED camera. If set to 0, the highest FPS for the specified resolution will be used. The following resolution/framerate combinations are supported:
- RESOLUTION_HD2K (2208x1242): 15 fps
- RESOLUTION_HD1080 (1920x1080): 15, 30 fps
- RESOLUTION_HD720 (1280x720): 15, 30, 60 fps
- RESOLUTION_VGA (672x376): 15, 30, 60, 100 fps
If the specified camera_fps is unsupported, the closest available FPS will be used. The ZED camera FPS is not tied to the codelet tick rate because the camera has an independent on-board CPU.
- color_temperature [int] [default=2800]: The color temperature control. Valid values are between 2800 and 6500 with a step of 100.
- contrast [int] [default=4]: Contrast level. Valid values are between 0 and 8.
- exposure [int] [default=50]: Exposure control. Valid values are between 0 and 100. The exposure time is interpolated linearly between 0.17072ms and the max time for a specific frame rate. The following are max times for common framerates:
- 15fps setExposure(100) -> 19.97ms
- 30fps setExposure(100) -> 19.97ms
- 60fps setExposure(100) -> 10.84072ms
- 100fps setExposure(100) -> 10.106624ms
- gain [int] [default=50]: Gain control. Valid values are between 0 and 100.
- device_id [int] [default=0]: The numeral of the system video device of the ZED camera. For example for /dev/video0 choose 0.
- enable_imu [bool] [default=false]: Turns on capture and publication of IMU data that is only supported by ZED Mini camera hardware
- settings_folder_path [string] [default="./"]: The folder path to the settings file (SN#####.conf) for the ZED camera. This file contains the calibration parameters for the camera.
- gpu_id [int] [default=0]: The GPU device to be used for ZED CUDA operations
- gray_scale [bool] [default=false]: Turns on gray scale images
- rgb [bool] [default=true]: Turns on RGB color images
- enable_factory_rectification [bool] [default=true]: Turns on rectification of images inside ZED camera
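The linear exposure interpolation described above can be sketched as follows (hypothetical helper; the maximum time depends on the frame rate as listed):

```python
def zed_exposure_time_ms(exposure, max_time_ms):
    # Exposure control values 0..100 interpolate linearly between
    # the minimum of 0.17072 ms and the frame-rate dependent maximum
    # (e.g. 19.97 ms at 30 fps, 10.84072 ms at 60 fps).
    min_time_ms = 0.17072
    return min_time_ms + (exposure / 100.0) * (max_time_ms - min_time_ms)
```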
isaac.alice.ChannelMonitor
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- channel [string] [default=]: The name of the channel to be monitored
- update_rate_on_tick [bool] [default=true]: If enabled, rates are updated during tick. If the tick period is long compared to the measured rate, this will lead to jitter in the visualization.
isaac.alice.Config
Description
Stores node configuration in the form of key-value pairs.
This component is added to every node by default and does not have to be added manually.
The config component is used by other components and the node itself to store structure and state. Most notably, configuration can be used directly in codelets to access custom configuration values. Support for basic types and some math types is built-in. Configuration is stored in a group-key-value format: each component and the node itself define separate groups of key-value pairs. Additionally, custom groups of configuration can be added by the user.
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- Outgoing messages
- Parameters
isaac.alice.Failsafe
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- name [string] [default=]: The name of the failsafe
isaac.alice.FailsafeHeartbeat
Description
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- Outgoing messages
Parameters
- interval [double] [default=]: The expected heartbeat interval (in seconds). This is the time duration for which the heartbeat stays activated after a single activation. The heartbeat needs to be activated again within this time interval, otherwise the corresponding Failsafe will fail.
- failsafe_name [string] [default=]: The name of the failsafe to which this heartbeat is linked. This must be the same as the name parameter in the corresponding Failsafe component.
- heartbeat_name [string] [default=]: The name of this heartbeat. This is purely for informative purposes.
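The interval/failsafe relationship described above can be sketched as follows. This is a simplified model, not SDK code; the `Heartbeat` and `Failsafe` classes here are assumptions for illustration.

```python
# Minimal sketch: a heartbeat stays "alive" for `interval` seconds after each
# beat; the linked failsafe fails as soon as any heartbeat expires.
class Heartbeat:
    def __init__(self, interval: float):
        self.interval = interval
        self.last_beat = None

    def beat(self, now: float) -> None:
        self.last_beat = now  # (re)activate the heartbeat

    def alive(self, now: float) -> bool:
        return self.last_beat is not None and now - self.last_beat <= self.interval

class Failsafe:
    def __init__(self, heartbeats):
        self.heartbeats = heartbeats

    def ok(self, now: float) -> bool:
        # healthy only while every linked heartbeat is still within its interval
        return all(hb.alive(now) for hb in self.heartbeats)
```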
isaac.alice.JsonToProto
Description
Converts JSON messages into proto messages.
JSON messages must be published on the channel "json". Note that the input channel does not appear in the normal list of channels due to how this codelet works internally.
The proto type ID must be set correctly, otherwise the conversion will fail.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
Outgoing messages
- proto [MessageHeaderProto]: Publish proto messages in registered proto definition as specified by incoming json message proto id.
- Parameters
isaac.alice.MessageLedger
Description
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- Outgoing messages
Parameters
- history [int] [default=10]: The maximum number of messages to hold in the history
isaac.alice.Pose
Description
Provides convenience functions to access 3D transformations from the application wide pose tree.
This component is added to every node by default and does not have to be added manually.
Poses use 64-bit floating point types and are 3-dimensional. All coordinate frames for the whole application are stored in a single central pose tree.
All functions below accept two coordinate frames: lhs and rhs. This refers to the pose lhs_T_rhs, the relative transformation between these two coordinate frames: a point p_rhs expressed in the rhs frame maps to p_lhs = lhs_T_rhs * p_rhs in the lhs frame.
Not all coordinate frames are connected. If this is the case, or if either of the two coordinate frames does not exist, the pose is said to be "invalid".
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- Outgoing messages
- Parameters
isaac.alice.Pose2Comparer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- first_lhs_frame [string] [default=]: Name of the left hand side frame of the first pose
- first_rhs_frame [string] [default=]: Name of the right hand side frame of the first pose
- second_lhs_frame [string] [default=]: Name of the left hand side frame of the second pose
- second_rhs_frame [string] [default=]: Name of the right hand side frame of the second pose
- threshold [Vector2d] [default=]: This codelet reports success if this parameter is set and the relative difference between the two poses is less than this threshold in position and angle.
isaac.alice.PoseInitializer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- lhs_frame [string] [default=]: Name of the reference frame of the left side of the pose
- rhs_frame [string] [default=]: Name of the reference frame of the right side of the pose
- pose [Pose3d] [default=]: Transformation lhs_T_rhs
- report_success [bool] [default=false]: If true reports success after initializing pose in the start function. This will make the attach_interactive_marker setting invalid because the codelet won’t tick.
- attach_interactive_marker [bool] [default=false]: If enabled the pose is editable via an interactive marker.
- add_yaw_degrees [double] [default=0.0]: Additional yaw angle around the Z axis in degrees. Currently only enabled if attach_interactive_marker is false.
- add_pitch_degrees [double] [default=0.0]: Additional pitch angle around the Y axis in degrees. Currently only enabled if attach_interactive_marker is false.
- add_roll_degrees [double] [default=0.0]: Additional roll angle around the X axis in degrees. Currently only enabled if attach_interactive_marker is false.
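The additional yaw/pitch/roll parameters each add an extra rotation about the Z, Y, or X axis. As an illustration (not SDK code; `rot_z` and `apply` are hypothetical helpers), the yaw case can be sketched with a plain rotation matrix; pitch (Y) and roll (X) are analogous.

```python
import math

def rot_z(deg):
    # rotation matrix for an extra yaw of `deg` degrees about the Z axis
    a = math.radians(deg)
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(m, v):
    # apply a 3x3 rotation matrix to a 3-vector
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]
```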
isaac.alice.PoseMessageInjector
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- pose [PoseTreeEdgeProto]: Incoming pose messages to inject into the pose tree
- Outgoing messages
- Parameters
isaac.alice.PoseToMessage
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
Outgoing messages
- pose [PoseTreeEdgeProto]: Outgoing pose message from pose tree
Parameters
- lhs_frame [string] [default=]: Name of the reference frame of the left side of the pose
- rhs_frame [string] [default=]: Name of the reference frame of the right side of the pose
isaac.alice.PoseTree
Description
Provides convenience functions to access 3D transformations from the application wide pose tree.
This component is added to every node by default and does not have to be added manually.
Poses use 64-bit floating point types and are 3-dimensional. All coordinate frames for the whole application are stored in a single central pose tree.
All functions below accept two coordinate frames: lhs and rhs. This refers to the pose lhs_T_rhs, the relative transformation between these two coordinate frames: a point p_rhs expressed in the rhs frame maps to p_lhs = lhs_T_rhs * p_rhs in the lhs frame.
Not all coordinate frames are connected. If this is the case, or if either of the two coordinate frames does not exist, the pose is said to be "invalid".
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- Outgoing messages
- Parameters
isaac.alice.ProtoToJson
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
Outgoing messages
- json [nlohmann::json]: Publishes converted Json message
- Parameters
isaac.alice.PyCodelet
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- config [json] [default=nlohmann::json({})]: Parameter for getting Isaac parameters to pyCodelets. For details, see PybindPyCodelet.
isaac.alice.Random
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- seed [int] [default=0]: The seed used by the random engine. If use_random_seed is set to true, this seed will be ignored.
- use_random_seed [bool] [default=false]: Whether to use the fixed seed above or a random seed that changes from one execution to another.
isaac.alice.Recorder
Description
Stores data in a log file. This component can for example be used to write incoming messages to a log file. The messages can then be replayed using the Replay component.
In order to record a message channel, set up an edge from the publishing component to the Recorder component. The source channel is the name of the channel under which the publishing component publishes the data. The target channel name on the Recorder component can be chosen freely. When data is replayed it will be published by the Replay component under that same channel name.
Warning: Please note that the log container format is not yet final and that breaking changes might occur in the future.
The root directory used to log data is base_directory/exec_uuid/tag/… where both base_directory and tag are configuration parameters. exec_uuid is a UUID which changes for every execution of an app and is unique over all possible executions. If tag is the empty string, the root log directory is just base_directory/exec_uuid/….
Multiple recorders can write to the same root log directory. In this case they share the same key-value database. However, only one recorder is allowed per log series: if the same component/key channel is logged by two different recorders, they cannot write to the same log directory.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- base_directory [string] [default=”/tmp/isaac”]: The base directory used as part of the log directory (see class comment)
- tag [string] [default=”“]: A tag used as part of the log directory (see class comment)
- enabled [bool] [default=true]: Can be used to disable logging.
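The log directory layout described above can be sketched as follows. This is an illustrative helper (the `log_root` function is hypothetical, not part of the SDK), showing how an empty tag collapses the path.

```python
# Sketch of the root log directory rule: base_directory/exec_uuid/tag/...,
# with the tag segment omitted when it is the empty string.
def log_root(base_directory: str, exec_uuid: str, tag: str) -> str:
    parts = [base_directory, exec_uuid]
    if tag:  # an empty tag is simply left out
        parts.append(tag)
    return "/".join(parts)
```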
isaac.alice.Replay
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- cask_directory [string] [default=”“]: The cask directory used to replay data from
- replay_time_offset [int64_t] [default=0]: Time offset into the log from which to start the replay
- use_recorded_message_time [bool] [default=false]: Decides whether to use recorded message pubtime and acqtime or replay current time as pubtime and synchronize the acqtime using the starting time of the replay.
- loop [bool] [default=false]: If enabled, replay starts again from the beginning once it reaches the end of the log
isaac.alice.ReplayBridge
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- request [nlohmann::json]: Request to replay node
Outgoing messages
- reply [nlohmann::json]: Reply from replay node
Parameters
- replay_component_name [string] [default=]: Replay component name in format node/component. Ex: replay/isaac.alice.Replay
isaac.alice.Scheduling
Description
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- Outgoing messages
Parameters
- priority [int] [default=0]: Controls the relative priority of a codelet task within a timeslice window. Used for periodic and event-driven codelets. Higher values have higher priority.
- slack [double] [default=0]: Controls how much variation in start time is allowed when executing a codelet. Used for periodic and event-driven codelets. The parameter unit is seconds.
- deadline [double] [default=]: Sets the expected time that the codelet will take to complete processing. If no value is specified, periodic tasks assume the period of the task and other tasks assume there is no deadline. The parameter unit is seconds.
- execution_group [string] [default=”“]: Sets the execution group for the codelet. Users can define groups in the scheduler configuration. If an execution_group is specified it overrides default behaviors.
If no value is specified the codelet will attempt to use the default configuration. The default configuration provided creates three groups.
Note: tickBlocking spawns a worker thread for the blocking task which, if executed in the WorkerGroup, can interfere with worker thread execution due to OS scheduling. Removing the default groups could lead to instabilities if not done carefully.
isaac.alice.Sight
Description
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- Outgoing messages
- Parameters
isaac.alice.Subgraph
Description
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- Outgoing messages
- Parameters
isaac.alice.Subprocess
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- start_command [string] [default=]: The command to run on start
- stop_command [string] [default=]: The command to run on stop
isaac.alice.TcpPublisher
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- port [int] [default=]: The TCP port number used to wait for connections and to publish messages.
isaac.alice.TcpSubscriber
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- host [string] [default=]: The IP address of the remote host from which messages will be received.
- port [int] [default=]: The TCP port number on which the remote host is publishing messages.
- reconnect_interval [double] [default=0.5]: If a connection to the remote host cannot be established or breaks, we try to re-establish the connection at this interval (in seconds).
- update_pubtime [bool] [default=true]: If set to true, the publish timestamp is set when the message is received; otherwise the original publish timestamp issued by the remote host is used.
isaac.alice.Throttle
Description
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- Outgoing messages
Parameters
- data_channel [string] [default=]: The name of the data channel to be throttled
- output_channel [string] [default=]: The name of the output data channel with throttled data
- minimum_interval [double] [default=0.0]: The minimum time period after which a message can be published again on the data channel.
- use_signal_channel [bool] [default=true]: If enabled the signal channel will define which incoming messages are passed on. This enables the parameters signal_channel and acqtime_tolerance.
- signal_channel [string] [default=]: The name of the signal channel used for throttling
- acqtime_tolerance [int] [default=]: The tolerance on the acqtime to match data and signal channels. If this parameter is not specified the latest available message on the data channel will be taken.
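The minimum_interval behavior can be sketched as follows. This is a simplified model (the `Throttle` class here is hypothetical and ignores the signal-channel mode): a message passes only if at least minimum_interval seconds have elapsed since the last message that passed.

```python
# Sketch of interval-based throttling on the data channel.
class Throttle:
    def __init__(self, minimum_interval: float):
        self.minimum_interval = minimum_interval
        self.last_passed = None

    def accept(self, acqtime: float) -> bool:
        # pass the first message, then enforce the minimum spacing
        if self.last_passed is None or acqtime - self.last_passed >= self.minimum_interval:
            self.last_passed = acqtime
            return True
        return False
```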
isaac.alice.TimeOffset
Description
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- Outgoing messages
Parameters
- input_channel [string] [default=”input”]: The name of the message channel whose timestamps will be changed.
- output_channel [string] [default=”output”]: The name of the message channel with changed timestamps.
- acqtime_offset [int64_t] [default=0]: A time offset in nanoseconds which will be added to the acquisition time of incoming messages.
isaac.alice.TimeSynchronizer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
- Parameters
isaac.audio.AudioCapture
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
Outgoing messages
- audio_capture [AudioDataProto]: Captured audio data packets and their configuration is published.
Parameters
- capture_card_name [string] [default=]: Audio device name as string. Keep empty for default selection.
- sample_rate [int] [default=16000]: Sample rate of the audio data
- num_channels [int] [default=6]: Number of channels present in audio data
- audio_frame_in_milliseconds [int] [default=100]: Time duration of one audio frame
- ticks_per_frame [int] [default=5]: Number of times to query ALSA inside 1 audio frame duration
isaac.audio.AudioEnergyCalculation
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- audio_packets [AudioDataProto]: Receive the multi-channeled audio packets for computing the energy.
Outgoing messages
- audio_energy [StateProto]: The average energy in dB per audio packet is published.
Parameters
- channel_indices [std::vector<int>] [default=]: Indices of the audio channels which are used for calculating the audio energy
- reference_energy [double] [default=0]: Reference energy in decibels (dB). The energy of the audio packet is computed w.r.t. this reference energy. This is usually the Acoustic Overload Point or maximum dB value mentioned in the specification sheet of the microphone.
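The energy computation described above can be sketched as average signal power in dB offset by the reference level. This is a hedged illustration, not the SDK formula; `packet_energy_db` is a hypothetical helper and the exact normalization may differ.

```python
import math

# Sketch: mean power of the packet in dB, expressed relative to a reference
# energy (e.g. the microphone's acoustic overload point).
def packet_energy_db(samples, reference_energy_db: float) -> float:
    mean_power = sum(s * s for s in samples) / len(samples)
    return 10.0 * math.log10(mean_power) + reference_energy_db
```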
isaac.audio.AudioFileLoader
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- audio_file_index [AudioFilePlaybackProto]: Index of the file from a pre-defined file list to be loaded.
Outgoing messages
- audio_data_publish [AudioDataProto]: Publish the audio data and its configuration from the requested file
Parameters
- pcm_filelist [std::vector<std::string>] [default=std::vector<std::string>()]: List of raw PCM audio files
- sample_rate [int] [default=16000]: Sample rate of the PCM audio files
- number_of_channels [int] [default=1]: Number of channels in the audio files
isaac.audio.AudioPlayback
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- audio_playback_input [AudioDataProto]: Receive the audio data to be played on the playback device.
- Outgoing messages
Parameters
- playback_card_name [string] [default=”“]: Audio device name as string. Keep empty for default device selection.
isaac.audio.SaveAudioToFile
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- audio [AudioDataProto]: audio data input
- Outgoing messages
Parameters
- filepath [string] [default=”/tmp/audio-out-f32-16k.pcm”]: audio data will be saved to this file
- enable_audio_dump [bool] [default=true]: flag to enable or disable runtime data dumping
isaac.audio.SoundSourceLocalization
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- audio_packets [AudioDataProto]: Receive the multi-channeled audio packets for computing the direction.
Outgoing messages
- audio_angle [StateProto]: Azimuth angle of the dominant sound source with respect to the reference axis (measured anti-clockwise) is published.
Parameters
- audio_duration [float] [default=0.5f]: Duration (in seconds) of the audio data used for computation of the azimuth angle. The milliseconds equivalent of this value should be an integral multiple of the input audio duration in milliseconds.
- microphone_distance [float] [default=0.0f]: Distance between two diagonally opposite microphones on the microphone array.
- microphone_pairs [std::vector<Vector2i>] [default=]: Pairs of indices of the audio channels corresponding to microphone elements.
- reference_offset_angle [int] [default=0]: Angle of the first diagonally opposite microphone pair with respect to the reference axis.
isaac.audio.TensorToAudioDecoder
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- tensors [TensorProto]: Receives a tensor of dimension (1, x) where x is the number of audio samples.
Outgoing messages
- audio [AudioDataProto]: Send out audio packets
Parameters
- sample_rate [int] [default=22050]: Sample rate of the audio received
- num_channels [int] [default=1]: Number of channels in audio
isaac.audio.TextToMel
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- text [ChatMessageProto]: Receives input text string
Outgoing messages
- mel_spectrogram [TensorProto]: Sends Mel Spectrograms with dimension {1, 80, x} where x depends on text length
Parameters
- session_timeout_value [double] [default=25.0]: Determines how long a streaming TextToMel session can run before it is terminated. After termination, the remaining part of the existing message is discarded and the next text string message is processed normally.
isaac.audio.VoiceCommandConstruction
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- keyword_probabilities [TensorProto]: Receive keyword probabilities (generally produced by tensorflow inference) as a 2D tensor. Only tensors with first dimension as 1 are accepted.
Outgoing messages
- detected_command [VoiceCommandDetectionProto]: Publish the detected command id and list of timestamps of the contributing keywords.
Parameters
- command_list [std::vector<std::string>] [default=]: User defined command list
- command_ids [std::vector<int>] [default=]: User defined command ids
- max_frames_allowed_after_keyword_detected [int] [default=]: Maximum number of frames to look for a defined command after the trigger keyword is detected
- probability_mean_window [int] [default=1]: Window size over which the keyword probability predictions are averaged.
- num_classes [int] [default=]: Number of classes (model-specific parameter present in metadata)
- classes [std::vector<std::string>] [default=]: List of classes in same order as that present in model output
- thresholds [std::vector<float>] [default=]: Probability thresholds per class
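The probability_mean_window smoothing described above can be sketched as a sliding-window average applied to the keyword probabilities before thresholding. This is an illustrative helper (`smooth` is a hypothetical name, not SDK code).

```python
from collections import deque

# Sketch: average each keyword probability over the last `window` frames.
def smooth(probabilities, window: int):
    buf = deque(maxlen=window)
    out = []
    for p in probabilities:
        buf.append(p)
        out.append(sum(buf) / len(buf))
    return out
```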
isaac.audio.VoiceCommandFeatureExtraction
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- audio_packets [AudioDataProto]: Receive audio packets to extract features
Outgoing messages
- feature_tensors [TensorProto]: Tensors of Extracted features
Parameters
- audio_channel_index [int] [default=0]: Index of the channel in multi-channel input data used to detect voice commands.
- minimum_time_between_inferences [float] [default=0.1]: Minimum time between two consecutive inferences
- sample_rate [int] [default=]: Sample rate of the audio supported (model-specific parameter)
- fft_length [int] [default=]: Length of Fourier transform window
- num_mels [int] [default=]: Number of mel bins to be extracted
- hop_size [int] [default=]: Stride for consecutive Fourier transform windows
- window_length [int] [default=]: Length of one audio frame which is used for keyword detection. This is the number of time frames after computing STFT with above params
isaac.deepstream.Pipeline
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- pipeline [string] [default=”videotestsrc ! video/x-raw]: The Deepstream/GStreamer media pipeline. Note the usage of an appsink element for entry points into Isaac and, equivalently, appsrc for exits. The name of the element becomes the channel name, for example: appsrc name=<RX CHANNEL NAME>. For pipeline syntax, please read the command manual for gst-launch-1.0 or this page: https://gstreamer.freedesktop.org/documentation/tools/gst-launch.html For supported capabilities, formats, memory models, and equivalent Isaac messages, please refer to the component’s detailed documentation.
isaac.detect_net.DetectNetDecoder
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- bounding_boxes_tensor [TensorProto]: Tensor from DetectNetv2 inference. Contains bounding boxes per class per grid box and is of size (4*N, R, C), where N = number of classes, R = grid rows, C = grid cols.
- confidence_tensor [TensorProto]: Tensor from DetectNetv2 inference. Contains confidence values per object per grid box and is of size (N, R, C), where N = number of classes, R = grid rows, C = grid cols.
Outgoing messages
- detections [Detections2Proto]: Output detections with bounding box, label, and confidence
Parameters
- non_maximum_suppression_threshold [double] [default=0.6]: Non-maximum suppression threshold. The greater this value is, the stricter the algorithm is when determining if two bounding boxes are detecting the same object.
- confidence_threshold [double] [default=0.6]: Confidence threshold of the detection. Decreasing this value allows less confident detections to be considered.
- labels [std::vector<std::string>] [default=]: Names of the classes trained by the network. The order and length of this list must correspond to the order and length of the labels given during training.
- output_scale [Vector2d] [default=]: Output scale in [rows, cols] for the decoded bounding boxes output. For example, this could be the image resolution before downscaling to fit the network input tensor resolution.
- bounding_box_scale [double] [default=35.0]: Bounding box normalization for both X and Y dimensions. This value is set in the DetectNetv2 training specification.
- bounding_box_offset [double] [default=0.5]: Bounding box offset for both X and Y dimensions. This value is set in the DetectNetv2 training specification.
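The two thresholds above interact as in standard confidence filtering plus non-maximum suppression. The sketch below is illustrative, not the SDK decoder; the (x1, y1, x2, y2, confidence) box format and the `decode`/`iou` helpers are assumptions.

```python
# Sketch of confidence filtering + greedy non-maximum suppression.
def iou(a, b):
    # intersection-over-union of two axis-aligned boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def decode(boxes, confidence_threshold=0.6, nms_threshold=0.6):
    # drop low-confidence boxes, then keep the most confident of each overlap group
    kept = []
    for box in sorted((b for b in boxes if b[4] >= confidence_threshold),
                      key=lambda b: -b[4]):
        if all(iou(box, k) < nms_threshold for k in kept):
            kept.append(box)
    return kept
```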
isaac.dynamixel.DynamixelDriver
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- command [StateProto]: The desired angular speeds for each motor
Outgoing messages
- state [StateProto]: The measured angular speeds for each motor
Parameters
- port [string] [default=”/dev/ttyUSB0”]: USB port where the Dynamixel controller is located. The port varies depending on the controller device, e.g., “/dev/ttyACM0” or “/dev/ttyUSB0”.
- baudrate [Baudrate] [default=Baudrate::k1M]: Baud rate of the Dynamixel bus. This is the rate of information transfer.
- servo_model [Model] [default=Model::MX12W]: Model of servo (AX12A, XM430, MX12W, XC430)
- control_mode [DynamixelMode] [default=DynamixelMode::kVelocity]: Controls whether the Dynamixels are driven in velocity (speed) mode or in position mode.
- servo_ids [std::vector<int>] [default=]: Unique identifier for Dynamixel servos. Each motor needs to be assigned a unique ID using the software provided by Dynamixel. This is a mandatory parameter.
- torque_limit [double] [default=1.0]: Servo maximum torque limit. Caps the amount of torque the servo will apply. 0.0 is no torque, 1.0 is max available torque
- max_speed [double] [default=6.0]: Maximum (absolute) angular speed for wheels
- command_timeout [double] [default=0.3]: Commands received that are older than command_timeout seconds will be ignored. Kaya will stop if no message is received for command_timeout seconds.
- debug_mode [bool] [default=false]: Enables debug mode in which all motors are driving with constant speed independent from incoming messages.
- debug_speed [double] [default=1.0]: If debug mode is enabled, all motors will rotate with this speed.
isaac.flatsim.DifferentialBasePhysics
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- command [ActuatorGroupProto]: Actuator commands for the wheels of a differential base
Outgoing messages
- bodies [RigidBody3GroupProto]: Resulting physics state of the differential base body
Parameters
- robot_model [string] [default=”shared_robot_model”]: Name of the robot model node
- wheel_acceleration_noise [double] [default=0.03]: Each step a random normal-distributed noise with the given sigma will be added to the desired wheel acceleration. The sigma will be scaled based on the time step and wheel speed.
- wheel_acceleration_noise_decay [double] [default=0.995]: The wheel acceleration noise is additive simulating a random walk. To keep the noise bounded around zero it is multiplied with a decay factor at every timestep.
- slippage_magnitude_range [Vector2d] [default=Vector2d(0.00, 0.05)]: A random friction value is applied which effectively reduces the effect of wheel speed on wheel distance driven. A friction value of 0 means full transmission, while a friction value of 1 means full slippage. Slippage is computed randomly using a uniform distribution with the given minimum and maximum value.
- slippage_duration_range [Vector2d] [default=Vector2d(0.50, 1.25)]: The slippage value is maintained constant for a certain duration and then changed to a new value. The duration of the slippage is computed using a uniform distribution with given minimum and maximum value.
- robot_init_pose_name [string] [default=”robot_init”]: Name of the pose in pose tree to use as the initial pose for robot
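The decaying random-walk noise described by wheel_acceleration_noise and wheel_acceleration_noise_decay can be sketched as follows. This is an illustrative model (the `step_noise` function is hypothetical and omits the time-step and wheel-speed scaling mentioned above).

```python
import random

# Sketch: the previous noise value decays toward zero, then a fresh
# normal-distributed sample is added, keeping the random walk bounded.
def step_noise(noise: float, sigma: float, decay: float, rng: random.Random) -> float:
    return decay * noise + rng.gauss(0.0, sigma)
```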
isaac.flatsim.DifferentialBaseSimulator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- diff_base_command [StateProto]: Input command message with desired body speed (should be of type: DifferentialBaseControl)
- physics_bodies [RigidBody3GroupProto]: Input state of the base rigid body as computed by physics
Outgoing messages
- physics_actuation [ActuatorGroupProto]: Output actuator message with desired accelerations for each wheel
- diff_base_state [StateProto]: Output state of differential base (DifferentialBaseDynamics)
Parameters
- max_wheel_acceleration [double] [default=10.0]: The maximum acceleration for a wheel
- power [double] [default=0.20]: How fast the base will accelerate towards the desired speed
- flip_left_wheel [bool] [default=false]: If this is enabled the direction of the left wheel will be flipped
- flip_right_wheel [bool] [default=false]: If this is enabled the direction of the right wheel will be flipped
- robot_model [string] [default=”shared_robot_model”]: Name of the robot model node
- joint_name_left_wheel [string] [default=”left_wheel”]: Name of the joint for left wheel
- joint_name_right_wheel [string] [default=”right_wheel”]: Name of the joint for right wheel
isaac.flatsim.FlatscanNoiser
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: Receive a FlatScan proto: this is a list of beams (angle + distance)
Outgoing messages
- noisy_flatscan [FlatscanProto]: Output a noisy FlatScan proto: this is a list of beams (angle + distance) with noise applied
Parameters
- min_range [double] [default=0.25]: The minimum range at which obstacles are detected
- max_range [double] [default=50.0]: The maximum range of the simulated LIDAR
- range_sigma_rel [double] [default=0.001]: Standard deviation of relative range error
- range_sigma_abs [double] [default=0.03]: Standard deviation of absolute range error
- beam_invalid_probability [double] [default=0.05]: Probability that a beam will be simulated as invalid
- beam_random_probability [double] [default=0.00001]: Probability that a beam will return a random range
- beam_short_probability [double] [default=0.03]: Probability that a beam will return a smaller range
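A per-beam noise model using the parameters above can be sketched as follows. This is a hedged illustration; the order in which the SDK applies these effects, and the `noisy_range` helper itself, are assumptions.

```python
import random

# Sketch: each beam is either invalid, a random return, an early (short)
# return, or the true range perturbed by relative + absolute Gaussian error.
def noisy_range(r, rng, min_range=0.25, max_range=50.0,
                sigma_rel=0.001, sigma_abs=0.03,
                p_invalid=0.05, p_random=0.00001, p_short=0.03):
    u = rng.random()
    if u < p_invalid:
        return 0.0  # beam reported as invalid
    if u < p_invalid + p_random:
        return rng.uniform(min_range, max_range)  # spurious random return
    if u < p_invalid + p_random + p_short:
        return rng.uniform(min_range, r)  # early return closer than the obstacle
    return r + rng.gauss(0.0, sigma_rel * r) + rng.gauss(0.0, sigma_abs)
```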
isaac.flatsim.HolonomicBaseSimulator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- command [StateProto]: Input command message with desired body speed
- physics_bodies [RigidBody3GroupProto]: Input state of the base rigid body as computed by physics
Outgoing messages
- physics_actuation [ActuatorGroupProto]: Output actuator message with desired accelerations for each wheel
- state [StateProto]: Output state of holonomic base
Parameters
- wheel_base_length [double] [default=0.125]: Distance of each wheel from robot center of mass
- wheel_radius [double] [default=0.041319]: Wheel radius
- max_safe_speed [double] [default=0.3]: Max safe speed
- max_angular_speed [double] [default=4.0]: Max turning rate
- max_wheel_acceleration [double] [default=10.0]: The maximum acceleration for a wheel
- power [double] [default=0.20]: How fast the base will accelerate towards the desired speed
- flip_back_wheel [bool] [default=false]: If this is enabled the direction of the back wheel will be flipped
- flip_front_left_wheel [bool] [default=false]: If this is enabled the direction of the front left wheel will be flipped
- flip_front_right_wheel [bool] [default=false]: If this is enabled the direction of the front right wheel will be flipped
- robot_model [string] [default="shared_robot_model"]: Name of the robot model node
- joint_name_back_wheel [string] [default="axle_0_joint"]: Name of the joint for back wheel
- joint_name_front_left_wheel [string] [default="axle_1_joint"]: Name of the joint for front left wheel
- joint_name_front_right_wheel [string] [default="axle_2_joint"]: Name of the joint for front right wheel
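The three wheel joints above suggest a three-omni-wheel holonomic base (as on Kaya). The standard inverse kinematics for such a base maps a body command (vx, vy, omega) to wheel angular speeds; the sketch below uses the default wheel_base_length and wheel_radius, but the wheel placement angles are an assumption — the codelet derives the actual geometry from the robot model node.

```python
import math

def wheel_speeds(vx, vy, omega, wheel_base_length=0.125,
                 wheel_radius=0.041319,
                 wheel_angles=(0.0, 2 * math.pi / 3, 4 * math.pi / 3)):
    """Inverse kinematics sketch for a three-omni-wheel holonomic base.

    The wheel placement angles are assumed, not taken from the SDK.
    """
    speeds = []
    for a in wheel_angles:
        # Tangential velocity at the wheel contact point, divided by radius.
        v_t = -math.sin(a) * vx + math.cos(a) * vy + wheel_base_length * omega
        speeds.append(v_t / wheel_radius)
    return speeds
```

A pure rotation command drives all three wheels at the same speed (wheel_base_length / wheel_radius per rad/s of body rotation), while a pure translation produces wheel speeds that sum to zero.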
isaac.flatsim.SimRangeScan
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- flatscan [FlatscanProto]: Output a FlatScan proto: this is a list of beams (angle + distance)
Parameters
- num_beams [int] [default=360]: The number of beams in the range scan
- min_range [double] [default=0.25]: The minimum range at which obstacles are detected
- max_range [double] [default=50.0]: The maximum range of the simulated LIDAR
- min_angle [double] [default=0.0]: The min angle of simulated beams
- max_angle [double] [default=TwoPi<double>]: The max angle of simulated beams
- map [string] [default="map"]: Map node to use for tracing range scans
- lidar_frame [string] [default="lidar_gt"]: Name of the frame of the simulated LiDAR sensor
isaac.fuzzy.EfllFuzzyEngineExample
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages (none)
Parameters (none)
isaac.fuzzy.LfllFuzzyEngineExample
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages (none)
Parameters (none)
isaac.gtc_china.PanTiltGoto
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- current_state [StateProto]: Input containing the current pan-tilt state
Outgoing messages
- target_state [StateProto]: Output state proto containing target pan and tilt angle information
Parameters
- target_pan_angle [double] [default=0.0]: Target pan angle for the camera
- target_tilt_angle [double] [default=0.0]: Target tilt angle for the camera
- tolerance [double] [default=0.05]: Defines how close the current pan and tilt angles must be to the target angles to be considered as having reached the target: if the absolute differences between the current and target pan and tilt angles are less than or equal to this value, the pan-tilt unit is considered to have reached the required angles.
isaac.hgmm.HgmmPointCloudMatching
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- cloud [PointCloudProto]: Takes as input point clouds from sensors such as lidar or depth cameras
Outgoing messages (none)
Parameters
- levels [int] [default=2]: The number of levels to build the HGMM tree. This number depends on the complexity of the scene geometry and the number of points in the point clouds. Typically, 2 works well for simple scenes/point clouds, while 3 empirically works better for denser point clouds such as Velodyne-32 or above. The higher the level, the more accurate the registration, but divergence becomes more probable (the model overfits and becomes unstable). Levels of 4 and above are typically reserved for high-fidelity 3D reconstructions, not 6-DOF registration.
- convergence_threshold [float] [default=0.001]: The lower the value, the longer the algorithm takes to converge, but the better the result. 0.01: fast to converge but worse accuracy. 0.001-0.0001: slow to converge but often better accuracy.
- max_iterations [int] [default=30]: Max iterations regardless of convergence. Most problems take on the order of 10-35 iterations per level for normal convergence tolerance ranges.
- noise_floor [float] [default=0.000]: TODO: Noise parameter (currently turned off). Used if data contains extreme outliers. In the meantime, basic filtering of the input needs to be performed outside of HGMM model creation and registration.
- zero_x_y_minimal_z [float] [default=1]: Minimal z-coordinate for valid points with zero x-y coordinates. Used to drop invalid points from erroneous sources.
- regularization [float] [default=0.01]: Regularization to prevent singularities and overfitting. If the solution is diverging, this parameter is too low. 0.0001: highly accurate but often unstable. 0.001: highly accurate but possible divergence. 0.01: robust convergence but higher error. 0.1: very robust but possibly biased result.
- axis_length [double] [default=1.0]: Ego frame axis length
- skip [int] [default=51]: Number of points to skip to reduce visualization load
- history_size [int] [default=10]: Number of past point clouds kept for visualization
- max_distance [double] [default=10.0]: Points beyond this distance are not visualized
isaac.imu.IioBmi160
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- imu_raw [ImuProto]: ImuProto is used to publish IMU data read from the buffer
Parameters
- i2c_device_id [int] [default=1]: I2C device ID: matches ID of /dev/i2c-X
- imu_T_imu [SO3d] [default=SO3d::FromAxisAngle(Vector3d{1, 0, 0}, Pi<double>)]: IMU mounting pose. In the base case, the IMU is mounted on its back: rotate 180 degrees about the X-axis (flip Y and Z axes)
isaac.imu.ImuCalibration2D
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- imu [ImuProto]: Imu Data
Outgoing messages (none)
Parameters
- imu_calibration_file [string] [default="imu_calibration.out.json"]: Path to the output calibration file. This file will be created if it does not exist and overwritten if it does.
- imu_variance_stationary [double] [default=0.2]: Threshold for stationary variance
- imu_window_length [int] [default=100]: Number of samples in window
isaac.imu.ImuCorrector
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- raw [ImuProto]: Receive raw IMU data
Outgoing messages
- corrected [ImuProto]: Publish corrected IMU data
Parameters
- calibration_file [string] [default=]: Optional calibration file. If a calibration file is provided, biases from the file will be removed from the IMU data. Otherwise calibration is performed at startup.
- calibration_variance_stationary [double] [default=0.1]: Stationary variance for calibration
- calibration_window_length [int] [default=100]: Number of samples in window for calibration
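The two calibration parameters imply a simple stationary-window bias estimate: accept the window mean as the bias only while the signal variance stays below the stationary threshold. A minimal sketch of that idea, with the actual estimator used by the codelet being an assumption:

```python
def estimate_bias(samples, variance_stationary=0.1):
    """Estimate a sensor bias from a window of samples.

    Returns the window mean if the window variance is below the stationary
    threshold, else None (the device is presumed to be moving). This is a
    hypothetical sketch of the calibration step the description implies.
    """
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((s - mean) ** 2 for s in samples) / n
    if variance < variance_stationary:
        return mean
    return None
```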
isaac.imu.ImuSim
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- bodies [RigidBody3GroupProto]: Input states of the rigid bodies as computed by physics engine
Outgoing messages
- imu_raw [ImuProto]: Imu proto is used to publish raw Imu data received from simulator
Parameters
- imu_name [string] [default="imu"]: Name of the IMU rigid body. This parameter is required and should match the config file for the sim
- gravity_norm [double] [default=9.80665]: Norm of the local gravitational constant
- sampling_rate [double] [default=30.0]: Sampling Frequency
- accel_bias [Vector3d] [default=Vector3d::Zero()]: Accelerometer Bias
- accel_noise [Vector3d] [default=Vector3d::Zero()]: Accelerometer (zero mean) noise std dev
- gyro_bias [Vector3d] [default=Vector3d::Zero()]: Gyroscope Bias
- gyro_noise [Vector3d] [default=Vector3d::Zero()]: Gyroscope (zero mean) noise std dev
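The bias and noise parameters suggest the usual additive IMU measurement model: measured value = true value + gravity (accelerometer only) + bias + zero-mean Gaussian noise. A hedged Python sketch of that model, with the exact formulation used by the codelet being an assumption:

```python
import random

def simulate_imu_sample(true_accel, true_gyro, gravity_in_body,
                        accel_bias=(0.0, 0.0, 0.0),
                        accel_noise=(0.0, 0.0, 0.0),
                        gyro_bias=(0.0, 0.0, 0.0),
                        gyro_noise=(0.0, 0.0, 0.0),
                        rng=random):
    """Additive IMU measurement model sketch (hypothetical).

    Vectors are 3-tuples; the noise parameters are per-axis standard
    deviations, matching the parameter descriptions above.
    """
    accel = [a + g + b + rng.gauss(0.0, s)
             for a, g, b, s in zip(true_accel, gravity_in_body,
                                   accel_bias, accel_noise)]
    gyro = [w + b + rng.gauss(0.0, s)
            for w, b, s in zip(true_gyro, gyro_bias, gyro_noise)]
    return accel, gyro
```

With the default zero biases and noise, a stationary body measures exactly the gravity vector on the accelerometer.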
isaac.json.JsonMockup
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- json [JsonProto]: Output JsonProto message
Parameters
- json_mock [json] [default=]: The JSON to publish
isaac.json.JsonReplay
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- json_proto [JsonProto]: JSON message read from the file
Parameters
- jsonfile_path [string] [default="/tmp/input.jsonl"]: Path to JSON file
isaac.json.JsonWriter
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- json [JsonProto]: Incoming JSON message to write that is received as a proto
- raw_json [Json]: Incoming JSON message to write that is received as a raw message
Outgoing messages (none)
Parameters
- filename [string] [default=]: Path to write the file. If the file already exists, it will be overwritten by this codelet.
- indent [int] [default=-1]: Sets the indent of nlohmann::basic_json::dump(). Leave as -1 for the compact representation with no newlines. Set to a positive value for newlines with that indent level.
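The indent semantics mirror those of nlohmann's dump(). As a rough Python analog (assuming json.dumps stands in for nlohmann::basic_json::dump()):

```python
import json

def dump_like_nlohmann(obj, indent=-1):
    """Rough Python analog of nlohmann::basic_json::dump(indent):
    -1 produces the compact single-line form with no newlines, while a
    positive value pretty-prints with that indent level."""
    if indent < 0:
        return json.dumps(obj, separators=(",", ":"))
    return json.dumps(obj, indent=indent)
```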
isaac.kaya.KayaBaseDriver
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- base_command [StateProto]: The desired motion of Kaya of type messages::HolonomicBaseControls
- wheel_state [StateProto]: The measured angular speeds for each wheel received from the motor driver. The order of wheels is the same as for the wheel_command channel.
Outgoing messages
- base_state [StateProto]: The state of Kaya of type messages::HolonomicBaseDynamics
- wheel_command [StateProto]: The desired angular speeds for each wheel to be sent to the motor driver. The order of wheels in the message is: front right, front left, back
Parameters
- wheel_base_length [double] [default=0.125]: Distance of the wheel center to the robot center of rotation
- wheel_radius [double] [default=0.04]: The radius of Kaya wheels
- max_linear_speed [double] [default=0.3]: Maximum allowed linear speed
- max_angular_speed [double] [default=0.5]: Maximum allowed angular speed
isaac.kinova_jaco.KinovaJaco
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- cartesian_pose_command [StateProto]: Command for end effector position and orientation
- joint_velocity_command [StateProto]: Command for angular velocities for joints
Outgoing messages
- cartesian_pose [StateProto]: Current position and orientation of end effector
- joint_position [StateProto]: Current angle, in Radians, for each joint (7-dof)
- joint_velocity [StateProto]: Current angular velocity, in Radians/sec, for each joint (7-dof)
- finger_position [StateProto]: Current position for each finger
Parameters
- kinova_jaco_sdk_path [string] [default=]: Path to the Jaco SDK, as set in jaco_driver_config.json. The driver is tested for use with JACO2SDK v1.4.2. Jaco SDK source: https://drive.google.com/file/d/17_jLW5EWX9j3aY3NGiBps7r77U2L64S_/view
- control_mode [ControlMode] [default=kCartesianPose]: Set control mode for arm. Can only accept commands corresponding to the current mode.
isaac.lidar_slam.Cartographer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: Cartographer uses a 2D LIDAR scan to build the map
Outgoing messages (none)
Parameters
- lua_configuration_directory [string] [default=""]: Folders to search for Cartographer lua scripts, separated by commas
- lua_configuration_basename [string] [default=""]: File name of the specific Cartographer lua script to load
- output_path [string] [default="/tmp"]: Folder to write submaps and the generated map
- background_size [Vector2i] [default=Vector2i(1500, 1500)]: The size of the canvas for visualizing the map in sight (in grid cells)
- background_translation [Vector2d] [default=Vector2d(-75, -75)]: Translation to apply on background image (in meters)
- num_visible_submaps [int] [default=8]: Only the most recent submaps are visualized with sight for performance reasons.
isaac.lidar_slam.GMapping
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: GMapping uses a 2D LIDAR scan to build the map
- odometry [Odometry2Proto]: Odometry can either be read from this message or from the pose tree. The message is only used if the parameter use_pose_tree is set to false, otherwise odometry is read from the pose tree.
Outgoing messages (none)
Parameters
- file_path [string] [default="/tmp"]: Directory path used to save map snapshots
- build_map_period [double] [default=2.0]: How often the map is recomputed, in seconds
- laser_matcher_resolution [double] [default=DegToRad(3.0)]: Resolution to be used in scan matcher angles
- map_x_max [double] [default=100.]: Maximum x value of the initial map
- map_y_max [double] [default=100.]: Maximum y value of the initial map
- map_x_min [double] [default=-100.]: Minimum x value of the initial map
- map_y_min [double] [default=-100.]: Minimum y value of the initial map
- map_resolution [double] [default=0.1]: Distance between each pixel in the map
- max_range [double] [default=32.0]: The maximum range of the lidar. This value should be close to the physical range of the lidar to exploit as much of the available information as possible. This value must not be larger than the range of the lidar for GMapping to operate.
- map_update_range [double] [default=30.0]: The range within which the map is updated. The update range must be smaller than or equal to the maximum range parameter, as it relies on the lidar range information. The value chosen controls the tradeoff between the map's global consistency and its sharpness.
- number_particles [int] [default=40]: Number of particles used to estimate position of the robot
- linear_distance [double] [default=0.3]: Linear threshold used to attempt scan matching
- angular_distance [double] [default=0.1]: Angular threshold used to attempt scan matching
- use_pose_tree [bool] [default=false]: Whether robot pose is read from pose tree or RX channel
isaac.map.Map
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages (none)
Parameters
- graph_file_name [string] [default=""]: Filename under which to store the current graph whenever there is an update to the map.
- config_file_name [string] [default=""]: Filename under which to store the current configuration whenever there is an update to the map.
isaac.map.MapBridge
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- request [nlohmann::json]: Request to the MapBridge
Outgoing messages
- reply [nlohmann::json]: Reply from the MapBridge
Parameters (none)
isaac.map.ObstacleAtlas
Description
Type: Component - This component does not tick and only provides certain helper functions.
Incoming messages (none)
Outgoing messages (none)
Parameters
- static_frame [string] [default="world"]: Frame which can be considered static; it is used to do time synchronization of obstacles.
isaac.map.OccupancyGridMapLayer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages (none)
Parameters
- filename [string] [default=]: Filename of greyscale PNG which will be loaded as the occupancy grid map
- cell_size [double] [default=]: Size of one map pixel in meters
- threshold [double] [default=0.4]: Threshold used to compute the distance map. Cells with a value larger than this threshold are assumed to be blocked.
isaac.map.PolygonMapLayer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages (none)
Parameters
- polygons [json] [default=nlohmann::json::object()]: A json object from configuration containing the polygons.
Layout:
{
  "poly1": {
    "points": [[<polygon point1>], [<polygon point2>]]
  }
}
- color [Vector3i] [default=(Vector3i{255, 0, 0})]: Layer color.
- frame [string] [default="world"]: Frame the polygons are defined in.
- obstacle_max_distance [double] [default=1.5]: The maximum distance to consider to create the obstacle from the polygons
- obstacle_pixel_size [double] [default=0.1]: The resolution of the map used to create the obstacle from the polygons
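The polygons layout above can be assembled programmatically before being written into the application configuration. A small sketch with a hypothetical helper (`make_polygon_config` is not part of the SDK; the point values are illustrative only):

```python
import json

def make_polygon_config(named_points):
    """Build a `polygons` configuration dict matching the documented layout.

    Each polygon must have at least 3 points to enclose an area.
    """
    config = {}
    for name, points in named_points.items():
        if len(points) < 3:
            raise ValueError(f"polygon '{name}' needs at least 3 points")
        config[name] = {"points": [list(p) for p in points]}
    return config

polygons = make_polygon_config({
    "poly1": [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0)],
})
print(json.dumps(polygons))
```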
isaac.map.Spline
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- json [JsonProto]: Outgoing message that defines this spline
Parameters
- filename [string] [default=]: If set, spline points will be read from file. Expected layout:
{
  "keypoints": [
    [2.3, 4.5], [1.1, 7.5], [0.0, -4.5], [-2.2, 0.1]
  ],
  "knot": 0.5
}
isaac.map.WaypointMapLayer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages (none)
Parameters
- waypoints [json] [default=nlohmann::json::object()]: A json object from configuration containing the waypoints.
Layout:
{
"wp1": { "pose": [1,0,0,0,0,0,0], "radius": 0.5 },
"wp3": { "pose": [1,0,0,0,0.1,-1.2,0], "color": [54.0, 127.0, 255.0] }
}
isaac.message_generators.CameraGenerator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- color_left [ColorCameraProto]: Random left color image
- color_right [ColorCameraProto]: Random right color image
- depth [DepthCameraProto]: Random depth image
Parameters
- rows [int] [default=1080]: The number of rows for generated data
- cols [int] [default=1920]: The number of columns for generated data
isaac.message_generators.ConfusionMatrixGenerator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- confusion_matrix [ConfusionMatrixProto]: Output segmentation prediction with regulated probabilities
Parameters (none)
isaac.message_generators.Detections2Generator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- mock_detections [Detections2Proto]: Output mocked detection proto message
Parameters
- detection_configuration [json] [default={}]: Parameter defining the configuration of the detections we need to mock.
Format: [ { "class_label": "A", "confidence": 0.8, ... } ]
The bounding box coordinates are of the form (x1, y1, x2, y2).
isaac.message_generators.DifferentialBaseControlGenerator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- command [StateProto]: A StateProto representing a navigation::DifferentialBaseControl state which is populated with values specified via parameters
Parameters
- linear_speed [double] [default=0.0]: Linear speed in outgoing state message
- angular_speed [double] [default=0.0]: Angular speed in outgoing state message
isaac.message_generators.DifferentialBaseStateGenerator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- state [StateProto]: Output state of differential base (DifferentialBaseDynamics)
Parameters
- linear_speed [double] [default=1.0]: Linear speed in outgoing state message
- angular_speed [double] [default=0.1]: Angular speed in outgoing state message
- linear_acceleration [double] [default=-0.1]: Linear acceleration in outgoing state message
- angular_acceleration [double] [default=0.05]: Angular acceleration in outgoing state message
isaac.message_generators.FlatscanGenerator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- flatscan [FlatscanProto]: Outgoing "flat" range scan
Parameters
- invalid_range_threshold [double] [default=0.2]: Beams with a range smaller than or equal to this distance are considered to have returned an invalid measurement.
- out_of_range_threshold [double] [default=100.0]: Beams with a range larger than or equal to this distance are considered to not have hit an obstacle within the maximum possible range of the sensor.
- beam_count [int] [default=1800]: Number of beams in outgoing message
- angles_range [Vector2d] [default=Vector2d(0.0, TwoPi<double>)]: Azimuth angle range for the beams
- range_mean [double] [default=20.0]: Mean value for the ranges.
- range_standard_deviation [double] [default=]: Standard deviation for the range values. Requires an alice::Random component in the same node.
isaac.message_generators.HolonomicBaseControlGenerator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- command [StateProto]: A StateProto representing a navigation::HolonomicBaseControls state which is populated with values specified via parameters
Parameters
- speed_angular [double] [default=0.0]: Angular speed in counter-clockwise direction
- speed_linear_x [double] [default=0.0]: Linear speed in forward direction
- speed_linear_y [double] [default=0.0]: Linear speed in left direction
isaac.message_generators.ImageLoader
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- color [ColorCameraProto]: Output the image proto for yolo inference
- depth [DepthCameraProto]: Output the image proto for yolo inference
Parameters
- color_filename [string] [default=]: Path of the color image file. The image is expected to be a 3-channel RGB PNG.
- depth_filename [string] [default=]: Path of the depth image file. The image is expected to be a 1-channel 16-bit greyscale PNG.
- color_glob_pattern [string] [default=]: Path of the color image directory. The directory is expected to contain only 3-channel RGB PNG.
The directory name should be specified according to the rules used by the shell (see glob(7), POSIX.2, 3.13), e.g. './*' locates all file names in ./
- loop_images [bool] [default=true]: The images in the specified directory play in a loop if set to true. Otherwise they play once.
- sort_by_number [bool] [default=false]: The images in the directory are sorted by NUMBER when they are named 'NUMBER.jpg' or 'NUMBER.png'.
- depth_scale [double] [default=0.001]: A scale parameter to convert 16-bit depth to f32 depth
- distortion_model [string] [default="brown"]: Image undistortion model. Must be 'brown' or 'fisheye'
- focal_length [Vector2d] [default=]: Focal length in pixels
- optical_center [Vector2d] [default=]: Optical center in pixels
- distortion_coefficients [Vector5d] [default=Vector5d::Zero()]: Distortion coefficients (see the DistortionProto in Camera.capnp for details)
- min_depth [double] [default=0.0]: Minimum depth
- max_depth [double] [default=10.0]: Maximum depth
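The sort_by_number behavior above can be sketched in a few lines: numeric ordering when the file stems are plain numbers, ordinary lexical ordering otherwise. The codelet's actual ordering logic is an assumption here.

```python
import os

def sort_image_files(filenames, sort_by_number=False):
    """Order image files the way `sort_by_number` describes (sketch):
    numerically when names look like 'NUMBER.png', lexically otherwise."""
    if not sort_by_number:
        return sorted(filenames)

    def key(name):
        # '10.png' -> 10, so '2.png' sorts before '10.png'.
        stem, _ = os.path.splitext(os.path.basename(name))
        return int(stem)

    return sorted(filenames, key=key)
```

Note that lexical sorting puts '10.png' before '2.png', which is exactly why the numeric option exists.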
isaac.message_generators.LatticeGenerator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- gridmap_lattice [LatticeProto]: Output lattice proto. This contains relevant information about the corresponding gridmap.
Parameters
- cell_size [double] [default=0.05]: Parameter defining the cell size in metres.
- dimensions [Vector2i] [default=Vector2i(256, 256)]: The dimensions of the grid map in pixels
- lattice_frame_name [string] [default="gridmap_frame"]: The name of the lattice coordinate frame. This will be used to write the pose of the gridmap relative to the reference frame in the pose tree.
- reference_frame_name [string] [default="ref"]: Name of the reference frame
- relative_offset [Vector2d] [default=Vector2d(0.0, -0.5)]: Percentage offset of the robot relative to the map. The offset determines the position of the robot (or the reference frame) with respect to the grid map created. The origin of the grid map is considered to be at the top-left of the grid. The x parameter defines the percentage offset for the rows (positive is in the upward direction and negative is in the downward direction), and the y parameter defines the offset for the columns (positive is in the left direction and negative is in the right direction). Determining the offset using a percentage basis makes it agnostic to the dimensions of the map. The default value fixes the reference frame at the top-center of the grid map.
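The relative_offset sign conventions can be made concrete with a short sketch. This interpretation of the conventions (positive x toward smaller row indices, positive y toward smaller column indices, origin at the top-left pixel) is an assumption drawn from the description above, not taken from the SDK source.

```python
def reference_pixel(dimensions, relative_offset):
    """Pixel coordinates (row, col) of the reference frame in the grid map.

    Assumed conventions: positive x offset moves up (smaller row indices),
    positive y offset moves left (smaller column indices).
    """
    rows, cols = dimensions
    offset_x, offset_y = relative_offset
    return (-offset_x * rows, -offset_y * cols)
```

With the default offset (0.0, -0.5) and a 256x256 map this places the reference frame at row 0, column 128, i.e. the top-center of the grid, matching the description.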
isaac.message_generators.PanTiltStateGenerator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- target [StateProto]: Outgoing target pan-tilt state
Parameters
- pan_max_angle [double] [default=1.0471975511965976]: Maximum pan to one side, in radians
- pan_offset_angle [double] [default=0.0]: Angular offset for panning
- pan_speed [double] [default=0.1]: Panning speed, in revolutions per second
- pan_mode [WaveMode] [default=WaveMode::kSinus]: Wave function for panning
- tilt_max_angle [double] [default=0.5235987755982988]: Maximum tilt to one side, in radians
- tilt_offset_angle [double] [default=0.0]: Angular offset for tilting
- tilt_speed [double] [default=0.1]: Tilting speed, in revolutions per second
- tilt_mode [WaveMode] [default=WaveMode::kSinus]: Wave function for tilting
isaac.message_generators.Plan2Generator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- plan [Plan2Proto]: The plan generated as specified via parameters
Parameters
- waypoints [std::vector<Pose2d>] [default=]: List of waypoint poses in the form of (angle, x, y).
Example configuration: "waypoints": [[<angle>, <x>, <y>], ...]
- plan_frame [string] [default="world"]: Frame for the waypoints. Sets the plan frame in the outgoing message.
- robot_frame [string] [default="robot"]: Name of the robot's frame
- static_frame [string] [default="world"]: Name of a frame that is not moving
- new_message_threshold [Vector2d] [default=Vector2d(1e-3, DegToRad(0.01))]: A new message will be published whenever change in poses exceeds this threshold. Values are for Euclidean distance and angle respectively.
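The new_message_threshold check can be sketched directly from its description: publish whenever either the Euclidean distance or the absolute angle change exceeds its threshold. A minimal sketch, assuming poses are (angle, x, y) tuples as in the waypoint convention above:

```python
import math

def should_publish(previous_pose, current_pose,
                   threshold=(1e-3, math.radians(0.01))):
    """Return True when the pose change warrants a new plan message.

    `threshold` holds (Euclidean distance, angle) limits, mirroring the
    new_message_threshold default. The exact comparison used by the codelet
    is an assumption.
    """
    da = abs(current_pose[0] - previous_pose[0])
    dx = current_pose[1] - previous_pose[1]
    dy = current_pose[2] - previous_pose[2]
    distance = math.hypot(dx, dy)
    return distance > threshold[0] or da > threshold[1]
```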
isaac.message_generators.PointCloudGenerator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- point_cloud [PointCloudProto]: Outgoing proto messages used to publish the point cloud messages.
Parameters
- point_count [int] [default=10000]: Total number of points to generate.
- point_per_message [int] [default=100]: Maximum number of points in a single given message.
- has_normals [bool] [default=false]: Whether there should be normals in the messages, as many as the number of points.
- has_colors [bool] [default=false]: Whether there should be colors in the messages, as many as the number of points.
- has_intensities [bool] [default=false]: Whether there should be intensities in the messages, as many as the number of points.
isaac.message_generators.PoseGenerator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages (none)
Parameters
- lhs_frame [string] [default=]: Name of the reference frame of the left side of the pose
- rhs_frame [string] [default=]: Name of the reference frame of the right side of the pose
- initial_pose [Pose3d] [default=Pose3d::Identity()]: Initial pose
- step [Pose3d] [default=Pose3d::Translation({1.0, 0.0, 0.0})]: The pose delta for every tick
isaac.message_generators.RangeScanGenerator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- scan [RangeScanProto]: Outgoing range scan
Parameters
- azimuth_angle_range [Vector2d] [default=Vector2d(0.0, TwoPi<double>)]: Azimuth angle range for the beams. (2pi, 0) would produce counter-clockwise rotation.
- num_slices [int] [default=16]: Number of (horizontal) ray slices that cover azimuth_angle_range
- num_slices_per_message [int] [default=0]: Number of (horizontal) ray slices published with each message. 0 means publish num_slices each message. Needs to be smaller than num_slices.
- vertical_beam_angles [std::vector<double>] [default=std::vector<double>({DegToRad(-15.0), DegToRad(-7.0), DegToRad(-3.0), DegToRad(-1.0), DegToRad(+1.0), DegToRad(+3.0), DegToRad(+7.0), DegToRad(+15.0)})]: The (vertical) beam angles to use for every slice
- max_range [double] [default=100.0]: Out of range threshold
- min_range [double] [default=0.0]: Invalid range threshold
- range_domain_max [double] [default=110.0]: Max value of the range domain. Used when normalizing range values.
- delta_time [int] [default=50'000]: Delay in microseconds between firings. The default corresponds to 20 Hz.
- intensity_denormalizer [double] [default=1.0]: Scale factor which can be used to convert an intensity value from an 8-bit integer to meters.
- height [double] [default=1.0]: The height of the lidar over the ground plane
- segments [std::vector<geometry::LineSegment2d>] [default={}]: Lines in the range/height plane which define the world
isaac.message_generators.RigidBody3GroupGenerator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- bodies [RigidBody3GroupProto]: Output group with a single body
Parameters
- body_name [string] [default="dummy_body"]: Name of the body
- reference_frame [string] [default="world"]: Reference frame for the body
- pose [Pose3d] [default=Pose3d::Identity()]: Pose of the body with respect to the reference frame
- linear_velocity [Vector3d] [default=Vector3d::Zero()]: Linear velocity of the body
- angular_velocity [Vector3d] [default=Vector3d::Zero()]: Angular velocity of the body
- linear_acceleration [Vector3d] [default=Vector3d::Zero()]: Linear acceleration of the body
- angular_acceleration [Vector3d] [default=Vector3d::Zero()]: Angular acceleration of the body
isaac.message_generators.TensorGenerator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- sample [TensorProto]: Produces a random list of tensors with the specified dimensions
Parameters
- dimensions [Vector3i] [default=Vector3i(3, 640, 480)]: Dimensions of the generated rank 3 tensor
- element_type [TensorGeneratorElementType] [default=TensorGeneratorElementType::kFloat32]: The element type for the tensor
isaac.message_generators.TrajectoryListGenerator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (none)
Outgoing messages
- trajectories [Vector3TrajectoryListProto]: The output channel to send all generated trajectories.
Parameters
- frame [string] [default="world"]: Reference frame for the generated trajectories.
- position_count [int] [default=60]: Number of positions in the generated trajectory.
- helix_radius [double] [default=5.0]: The radius of the vertical helix used as the synthetic trajectory.
- position_delta_angle [double] [default=0.1]: The delta angle between consecutive positions in the generated trajectory.
isaac.ml.ColorCameraEncoderCpu
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- rgb_image [ColorCameraProto]: Input RGB color image
Outgoing messages
- tensor [TensorProto]: A rank 3 tensor with image data normalized and transformed according to parameters.
Parameters
- rows [int] [default=960]: The image is resized before it is encoded. Currently, only downsampling is supported for this. Number of pixels in the height dimension of the downsampled image.
- cols [int] [default=540]: The image is resized before it is encoded. Currently, only downsampling is supported for this. Number of pixels in the width dimension of the downsampled image.
- pixel_normalization_mode [ImageToTensorNormalization] [default=ImageToTensorNormalization::kNone]: Type of Normalization to be performed.
- tensor_index_order [ImageToTensorIndexOrder] [default=ImageToTensorIndexOrder::k012]: The indexing order, default is {row, column, channel}
isaac.ml.ColorCameraEncoderCuda
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- rgb_image [ColorCameraProto]: Input RGB color image
Outgoing messages
- tensor [TensorProto]: A rank 3 tensor with image data normalized and transformed according to parameters.
Parameters
- rows [int] [default=960]: The image is resized before it is encoded. Currently, only downsampling is supported for this. Number of pixels in the height dimension of the downsampled image.
- cols [int] [default=540]: The image is resized before it is encoded. Currently, only downsampling is supported for this. Number of pixels in the width dimension of the downsampled image.
- keep_aspect_ratio [bool] [default=true]: If true, the aspect ratio of the image is preserved during resizing; the ROI is centered and padded.
- pixel_normalization_mode [ImageToTensorNormalization] [default=ImageToTensorNormalization::kUnit]: Type of normalization to be performed. Currently only unit normalization is supported for the CUDA variant.
- tensor_index_order [ImageToTensorIndexOrder] [default=ImageToTensorIndexOrder::k012]: The indexing order, default is {row, column, channel}
isaac.ml.ConfusionMatrixAggregator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- sample_metrics [ConfusionMatrixProto]: incoming metrics message
Outgoing messages
- accumulated_metrics [ConfusionMatrixProto]: outgoing accumulated metrics message
Parameters
- confusion_matrix_slice_index [int] [default=0]: Index to specify which slice of the confusion matrix we want to visualize. The slicing is done along the third dimension. Hence each slice of the matrix represents a 2D tensor which is a single confusion matrix for a particular intersection over union threshold.
isaac.ml.Detection3Encoder
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detection3 [Detections3Proto]: Input detection proto.
Outgoing messages
- tensor [TensorProto]: A tensor of dimensions (N, 8), where N is the number of rigid bodies. Columns 0-2 are translations in the order (px, py, pz); columns 3-6 are orientations as quaternions in the order (qw, qx, qy, qz); column 7 is the class id of the rigid body. Example row: {px_1, py_1, pz_1, qw_1, qx_1, qy_1, qz_1, id_1}
Parameters
- class_names [std::vector<std::string>] [default={}]: List of class names to detect as string
isaac.ml.DetectionComparer
Description
Evaluates the predicted object detections against the ground truth. Academic standards for evaluating object detection models involve computing the confusion matrix of the predicted detections over a range of intersection over union thresholds. The intersection over union of two bounding boxes is calculated by the following formula: (area of intersection of the two bounding boxes) / (area of their union). It thus measures the degree of overlap between the two bounding box rectangles. If the two bounding boxes overlap perfectly, their intersection over union score is 1.0.
This codelet computes a confusion matrix for each sample, of dimensions (num_classes + 1) * (num_classes + 1) * num_iou_thresholds. The element at (i, j, k) is the number of detections in that sample whose ground truth class label is class i and which were predicted as class j using the k-th intersection over union threshold. The last element of the 0th and 1st dimensions represents the background class (bg). An example confusion matrix for classes A and B at a single intersection over union threshold:
A  ( 2 0 0 )
B  ( 0 3 0 )
bg ( 0 0 0 )
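The intersection over union computation described above can be sketched as follows (a minimal Python illustration, not the SDK implementation; the (x_min, y_min, x_max, y_max) box layout is an assumption):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    # Corners of the intersection rectangle.
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Two identical boxes score 1.0, disjoint boxes score 0.0, and partial overlaps fall in between.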
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- ground_truth_detection [Detections2Proto]: Ground truth object detection input
- predicted_detection [Detections2Proto]: Predicted detection input
Outgoing messages
- metrics [ConfusionMatrixProto]: Output containing the evaluated metrics for the detections in a single sample.
The output message contains the following information:
* Number of image samples over which the metrics were computed.
* List of intersection over union thresholds over which the metrics were computed.
* A 3D tensor representing the confusion matrices calculated over these intersection over union thresholds.
Parameters
- intersection_over_union_thresholds [std::vector<double>] [default=std::vector<double>({0.5, 0.8, 0.95})]: List of intersection over union thresholds over which the metrics are computed
- class_names [std::vector<std::string>] [default={}]: The allowed class names for the ground truth and predicted detections. The confusion matrix is constructed by assigning an index to each of these classes and one for the background class. If a sample contains any class other than the ones specified here, it gets dropped.
isaac.ml.DetectionEncoder
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detection [Detections2Proto]: Input detection proto.
Outgoing messages
- tensor [TensorProto]: Detection encoded as a (N, 5) tensor where N is the number of bounding boxes. Channels are: (bb_min_x, bb_min_y, bb_max_x, bb_max_y, class_id).
Parameters
- class_names [json] [default={}]: The class names of our detection objects.
- area_threshold [double] [default=10.0]: The minimum area of bounding boxes
isaac.ml.DetectionImageExtraction
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_detections [Detections2Proto]: Input list of bounding boxes to crop to. Labels and confidence are not used.
- input_image [ColorCameraProto]: Input image from which to crop and resize.
Outgoing messages
- output_tensors [TensorProto]: Cropped and resized output as batch of image tensors.
Parameters
- downsample_size [Vector2i] [default=]: Target dimensions (rows, cols) for downsample after crop.
- pixel_normalization_mode [ImageToTensorNormalization] [default=ImageToTensorNormalization::kUnit]: Type of Normalization to be performed.
- tensor_index_order [ImageToTensorIndexOrder] [default=ImageToTensorIndexOrder::k201]: The indexing order, default is {channel, row, column}.
isaac.ml.Detections3Comparer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- reference_poses [Detections3Proto]: Reference pose
- predicted_poses [Detections3Proto]: Predicted pose
Outgoing messages
- statistics [JsonProto]: Outputs statistics about the reference_poses and the predicted_poses.
Parameters (none)
isaac.ml.EvaluateSegmentation
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- segmentation_ground_truth [TensorProto]: Ground truth segmentation input
- segmentation_prediction [SegmentationPredictionProto]: Predicted segmentation
Outgoing messages (none)
Parameters (none)
isaac.ml.FilterDetectionsByLabel
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_detections [Detections2Proto]: A list of detections which may have different labels.
Outgoing messages
- output_detections [Detections2Proto]: A subset of the input detections, filtered by matching the included and excluded labels.
Parameters
- whitelist_labels [std::vector<std::string>] [default=]: The labels for which to include only detections by string match. Includes all if not set. NOTE Set either whitelist_labels or blacklist_labels, not both.
- blacklist_labels [std::vector<std::string>] [default=]: The labels for which to exclude some detections by string match. Excludes none if not set. NOTE Set either whitelist_labels or blacklist_labels, not both.
isaac.ml.GenerateKittiDataset
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- image [ColorCameraProto]: Image input (must be a three-channel color image of image3ub)
- detections [Detections2Proto]: Detections associated with the image input
Outgoing messages (none)
Parameters
- num_training_samples [int] [default=1000]: The total number of training samples to generate
- num_testing_samples [int] [default=100]: The total number of testing samples to generate
- path_to_dataset [string] [default=]: Path to the root of the KITTI dataset. See https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#kitti_file for file structure and organization.
isaac.ml.HeatmapDecoder
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- tensor [TensorProto]: Input tensor containing heatmap of probabilities
Outgoing messages
- heatmap [HeatmapProto]: Output heatmap proto
Parameters
- grid_cell_size [double] [default=2.0]: Cell size (in metres) of every pixel in heatmap
- map_frame [string] [default=”world”]: The pose map frame for the heatmap
isaac.ml.HeatmapEncoder
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- heatmap_proto [HeatmapProto]: Input heatmap proto containing heatmap image of probabilities
Outgoing messages
- heatmap_tensor [TensorProto]: Heatmap encoded as a tensor
Parameters (none)
isaac.ml.LabelToBoundingBox
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- segmentation [SegmentationCameraProto]: Input image with class and instance labels.
Outgoing messages
- detections [Detections2Proto]: Computed bounding boxes - one AABB for every occurring instance/label combination.
Parameters
- resolution [int] [default=1]: The target resolution when computing bounding boxes. A value of 1 means bounding boxes are pixel-accurate. A value of 3 would mean bounding boxes are accurate up to 3 pixels.
- min_bbox_size [int] [default=1]: Minimum size in pixels across the two dimensions of the rectangle for it to be considered a bounding box of non-zero size. A value of 1 means the bounding box rectangle length and breadth must be at least one pixel.
isaac.ml.ResizeDetections
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detections [Detections2Proto]: Input detections associated with an image of dimensions specified by the input_image_dimensions ISAAC_PARAM
Outgoing messages
- resized_detections [Detections2Proto]: Output detections associated with an image of dimensions specified by the output_image_dimensions ISAAC_PARAM
Parameters
- input_image_dimensions [Vector2d] [default=]: Resolution of the image (rows, cols) that the input detections were computed for.
- output_image_dimensions [Vector2d] [default=]: Resolution of the image (rows, cols) that the output detections should be transformed to.
isaac.ml.RigidbodyToDetections3
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- rigid_bodies [RigidBody3GroupProto]: Input list of 3D rigid body poses in Isaac SDK coordinate frame
Outgoing messages
- detections [Detections3Proto]: Output list of 3D rigid body poses with respect to desired rigid body coordinate frame from the input list. Index of this reference rigid body in the input list is given by input parameter, ref_frame_id.
Parameters
- ref_frame_id [int] [default=0]: Index of the rigid body in input list that is used as reference coordinate frame for publishing the poses of all the rigid bodies in the input list. If ref_frame_id < 0, object poses are published with respect to Isaac SDK coordinate frame.
isaac.ml.SampleAccumulator
Description
Collects training samples and makes them available for machine learning.
Each sample consists of a list of tensors. Tensors must currently be based on 32-bit floats. This codelet does not use macros to define input channels. Instead, input channels are created based on the parameter channel_names.
Note: SampleAccumulator processes one sample at a time which might lead to message loss with many channels at a high data rate.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (defined by the channel_names parameter)
Outgoing messages (none)
Parameters
- sample_buffer_size [int] [default=256]: Number of training samples to keep in the buffer
- randomize_samples [bool] [default=true]: Randomize the order of samples in the buffer when true
- channel_names [std::vector<std::string>] [default={“samples”}]: Names of input channels. A sample will contain one tensor for each input channel in the given order.
isaac.ml.SegmentationComparer
Description
Evaluates the predicted segmentation output against the ground truth. This codelet computes a confusion matrix for each sample, of dimensions (num_classes + 1) * (num_classes + 1) * num_thresholds. The element at (i, j, 0) is the number of pixels in that sample whose ground truth class label is class i and which were predicted as class j. The last element of the 0th and 1st dimensions represents the pixels that do not belong to any of the classes in consideration (N/A). Typically the pixels in this category are:
1. Those with an index higher than the number of classes in consideration, as determined by the number_of_classes parameter.
2. Those which were assigned the index of the unknown class in TensorArgMax.
An example confusion matrix for classes A and B:
A   ( 2 0 0 )
B   ( 0 3 0 )
N/A ( 0 0 0 )
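The per-pixel confusion matrix described above can be sketched as follows (a minimal numpy illustration, not the SDK implementation; here any pixel index outside [0, num_classes) is counted in the N/A bucket):

```python
import numpy as np

def pixel_confusion_matrix(ground_truth, prediction, num_classes):
    """Build a (num_classes + 1) x (num_classes + 1) confusion matrix where
    rows are ground truth classes, columns are predicted classes, and the
    last row/column collects pixels outside the considered classes (N/A)."""
    na = num_classes  # index of the N/A bucket
    gt = np.where((ground_truth >= 0) & (ground_truth < num_classes), ground_truth, na)
    pr = np.where((prediction >= 0) & (prediction < num_classes), prediction, na)
    matrix = np.zeros((num_classes + 1, num_classes + 1), dtype=int)
    np.add.at(matrix, (gt.ravel(), pr.ravel()), 1)  # count each (gt, pred) pair
    return matrix
```

Perfect predictions put all counts on the diagonal; pixels assigned the unknown class index by TensorArgMax end up in the N/A column.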
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- ground_truth [TensorProto]: Ground truth object segmentation input
- prediction [TensorProto]: Predicted segmentation input
Outgoing messages
- metrics [ConfusionMatrixProto]: Output containing the evaluated metrics for the segmentations in a single sample.
Parameters
- argmax_threshold [double] [default=0.5]: The discretization threshold that was used to convert a 3 dimensional tensor to a 2 dimensional tensor in TensorArgMax. This parameter is repeated here so as to fill the “thresholds” parameter in the ConfusionMatrixProto. During evaluation, it’s important to know the confidence threshold that was used to decide if a prediction was valid or not, since the confusion matrix produced could vary depending on this threshold. Hence, although the inference results provided to this comparer codelet have already been filtered based on a threshold, we repeat the parameter for information so that it can be propagated downstream.
- number_of_classes [int] [default=0]: Number of classes expected from ground truth and prediction. This information is needed to build a confusion matrix of the appropriate size and to determine the class indices to compare. For example, if this value is set to 2, we’d consider classes 0 and 1 while comparing the ground truth and predicted segmentations.
isaac.ml.SegmentationDecoder
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- tensors [TensorProto]: The input tensor contains semantic segmentation label prediction where each pixel has a probability distribution over all classes. Dimensions: (rows, cols, number of classes)
Outgoing messages
- segmentation_prediction [SegmentationPredictionProto]: Output segmentation prediction proto which contains the class information
Parameters
- class_names [json] [default={}]: Names of the classes as an array. Each class is represented by a string. The number of classes must match the number of classes in the tensor input.
isaac.ml.SegmentationEncoder
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- segmentation [SegmentationCameraProto]: Input segmentation image.
Outgoing messages
- tensor [TensorProto]: Output tensor encoding the segmentation image. Dimensions are (rows, cols, channels), where the number of channels is 1 if the number of classes is 2 (as in binary segmentation), or (n + 1) if the number of classes is ‘n’ (as in multiclass segmentation).
Parameters
- rows [int] [default=256]: Height of the downsampled segmentation image
- cols [int] [default=512]: Width of the downsampled segmentation image
- offset [int] [default=1]: Offset by which the actual pixel value differs from the label index. The segmentation image comes with a per-pixel integer labelling which denotes the object it belongs to. This integer is in turn mapped to a string label name in the format <object_name>:<label_index>. In some cases (such as NavSim), the label index could be equal to the pixel value, and in other cases (such as IsaacSim) they could differ by a fixed offset value.
- class_label_indices [std::vector<int>] [default=]: The pixel values that are to be encoded as valid classes, in case the labels are not provided. These are only used in the absence of the strings labels to be encoded, and if we specify the input mode as “NoLabelsAvailable”
- input_mode [InputMode] [default=InputMode::kLabelsAvailable]: Expected data input mode. When the string labels are available, the mode would be “LabelsAvailable” When there are no string labels available, the mode would be “NoLabelsAvailable”. In this case, the encoder looks for the pixel values which are to be encoded as valid classes.
- output_type [OutputMode] [default=OutputMode::kDistribution]: Parameter defining the format of the output tensor. If the mode is “Index”, we publish a 3D integer tensor, where each element at position (row, col, 0) represents the index of the class that the corresponding pixel at position (row, col) belongs to. If the mode is “Distribution”, we publish a tensor representing the probability distribution over the classes.
- class_label_names [std::vector<std::string>] [default={}]: A list of string labels representing the classes which need to be encoded. Typically, the input proto message contains: * An image where each pixel has a numerical value that represents its class. * The mapping of these numerical values to their string labels. We might want to encode a subset of these classes in the output tensor. This subset is determined by the class_label_names parameter. Pixels which belong to classes other than the ones specified in this parameter are counted as “everything else”.
isaac.ml.Teleportation
Description
Teleportation is a class that generates random poses and sends them to an actor group codelet. Output pose is generated in 4 steps:
- relative_frame: This optional pose can be supplied as an input message. It is useful when chaining Teleportation codelets.
- base_pose: Pose is picked in one of the two modes:
- box mode: Uniform randomly pick each pose value from given ranges, i.e., yaw angle is between min_yaw and max_yaw.
- spline mode: Uniform randomly pick a pose that is tangent to the given spline.
- noise_pose: Gaussian noise generated using given mean and standard deviation values.
- offset_pose: Pose applied to transform frames. Supplied by user as a parameter.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- relative_frame [Pose3dProto]: Proto used to receive a reference frame (pose)
Outgoing messages
- rigid_command [RigidBody3GroupProto]: Proto used to publish rigid body pose to the sim bridge
- relative_frame_cmd [Pose3dProto]: Proto used to publish rigid body pose to another teleportation codelet as a reference frame
Parameters
- enable_on_relative_frame [bool] [default=false]: Flag to tick on relative frame message
- name [string] [default=""]: Name of actor to teleport
- min_scale [double] [default=0.0]: Minimum multiplicative scale factor of corresponding objects in simulation
- max_scale [double] [default=1.0]: Maximum multiplicative scale factor of corresponding objects in simulation
- enable_scale [bool] [default=false]: Flag to enable scale
- base_mode [BaseMode] [default=BaseMode::kBox]: Parameter for step 1. Specifies how the base pose will be generated. Please see codelet summary for a list of modes.
- min [Vector3d] [default=Vector3d::Zero()]: Parameter for “box” mode of step 1. Minimum translation in X, Y, Z coordinates
- max [Vector3d] [default=Vector3d(1.0, 1.0, 1.0)]: Parameter for “box” mode of step 1. Maximum translation in X, Y, Z coordinates
- enable_translation_x [bool] [default=true]: Parameter for “box” mode of step 1. Flag to enable translation (X)
- enable_translation_y [bool] [default=true]: Parameter for “box” mode of step 1. Flag to enable translation (Y)
- enable_translation_z [bool] [default=true]: Parameter for “box” mode of step 1. Flag to enable translation (Z)
- min_roll [double] [default=0.0]: Parameter for “box” mode of step 1. Minimum roll change after a teleportation
- max_roll [double] [default=TwoPi<double>]: Parameter for “box” mode of step 1. Maximum roll change after a teleportation
- enable_roll [bool] [default=false]: Parameter for “box” mode of step 1. Flag to enable rotation (roll)
- min_pitch [double] [default=0.0]: Parameter for “box” mode of step 1. Minimum pitch change after a teleportation
- max_pitch [double] [default=TwoPi<double>]: Parameter for “box” mode of step 1. Maximum pitch change after a teleportation
- enable_pitch [bool] [default=false]: Parameter for “box” mode of step 1. Flag to enable rotation (pitch)
- min_yaw [double] [default=0.0]: Parameter for “box” mode of step 1. Minimum yaw change after a teleportation
- max_yaw [double] [default=TwoPi<double>]: Parameter for “box” mode of step 1. Maximum yaw change after a teleportation
- enable_yaw [bool] [default=false]: Parameter for “box” mode of step 1. Flag to enable rotation (yaw)
- spline_distance [double] [default=0.02]: Parameter for “spline” mode of step 1. We will travel for this fraction of the spline distance before uniformly randomly sampling a new point on the spline again.
- spline_speed [double] [default=0.005]: Parameter for “spline” mode of step 1. Speed of travel. Unit is fraction of spline length per second. Negative speed corresponds to driving backwards.
- spline_flip_probability [double] [default=0.5]: Parameter for “spline” mode of step 1. With this probability, the direction of the tangent will be flipped.
- translation_standard_deviation [Vector3d] [default=Vector3d::Zero()]: Parameter for step 2. A noise for the translation with this standard deviation will be applied.
- roll_pitch_yaw_standard_deviation [Vector3d] [default=Vector3d::Zero()]: Parameter for step 2. A noise for the angles with this standard deviation will be applied.
- offset_pose [Pose3d] [default=Pose3d::Identity()]: Parameter for step 3. Offset pose to be applied to the combined pose.
isaac.ml.TensorArgMax
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input [TensorProto]: Input rank-3 tensor with dimensions (rows, columns, channels).
Outgoing messages
- argmax [TensorProto]: Output rank-2 tensor with dimensions (rows, columns) which stores the index of the max channel.
Parameters
- threshold [double] [default=0.5]: Threshold which defines whether the argmax channel index for a pixel is assigned to the 2-dimensional tensor. If the value at the argmax index is less than this threshold, the corresponding element in the discretized tensor is assigned the unknown class index.
- non_max_index [int] [default=-1]: Value to be assigned for the unknown class.
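The thresholded argmax described by these parameters can be sketched as follows (a minimal numpy illustration, not the SDK implementation):

```python
import numpy as np

def thresholded_argmax(tensor, threshold=0.5, non_max_index=-1):
    """Collapse a (rows, cols, channels) tensor to a (rows, cols) tensor of
    channel indices. Pixels whose maximum channel value is below the
    threshold receive the unknown class index instead."""
    argmax = tensor.argmax(axis=2)     # winning channel per pixel
    max_values = tensor.max(axis=2)    # confidence of the winning channel
    return np.where(max_values >= threshold, argmax, non_max_index)
```

A pixel with channel values (0.3, 0.4) would be assigned the unknown index under the default 0.5 threshold, while (0.9, 0.1) maps to channel 0.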
isaac.ml.TensorChannelSum
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- tensor [TensorProto]: Input tensor having dimensions (rows, columns, channels)
Outgoing messages
- image [ImageProto]: Output image having dimensions (rows, columns, 2)
Parameters
- channel_zero_class_indices [std::vector<int>] [default=]: Channel indices of the input tensor which are to be added to compute the pixel values in channel 0 of the output image.
- channel_one_class_indices [std::vector<int>] [default=]: Channel indices of the input tensor which are to be added to compute the pixel values in channel 1 of the output image.
isaac.ml.TensorRTInference
Description
This codelet loads a frozen neural network model into memory, generates an optimized TensorRT engine, evaluates the model using tensors of type TensorProto received on RX channels, and publishes the network’s output tensors on TX channels of type TensorProto.
Please refer to Tensorflow inference for an explanation of how to set up input and output channels.
Note: TensorRT always uses planar storage order for images, and not interleaved storage.
Note: The batch dimension is optional, i.e. both (1, 3, 480, 640) and (3, 480, 640) are allowed.
See the Machine Learning Workflow section of the Development Guide for more information.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (defined by the input_tensor_info parameter)
Outgoing messages (defined by the output_tensor_info parameter)
Parameters
- input_tensor_info [json] [default={}]: Input Tensor Information in JSON, example:
[{"operation_name":"input", "dims":[1,3,480,640]}]
- output_tensor_info [json] [default={}]: Output Tensor Information in JSON example:
[{ "operation_name": "output", "dims": [1,1000] }]
See the Machine Learning Workflow section of the Development Guide for more information.
- model_file_path [string] [default=]: Path to the frozen model, in .uff, .onnx, or .etlt formats. Note also: engine_file_path
- engine_file_path [string] [default=]: Path to the CUDA engine which is used for inference (input or location for the cached engine)
- etlt_password [string] [default=]: Password used to decrypt the model if it is of ETLT format (optional).
- force_engine_update [bool] [default=false]: Force update of the CUDA engine, even if input or cached .plan file is present. Debug feature.
- inference_mode [InferenceMode] [default=InferenceMode::kFloat16]: Parameter to define the inference mode. The default value is Float16
- max_batch_size [int] [default=]: Maximum batch size. The default value can be inferred from the input_tensor_info parameter. Note: if the batch size in input_tensor_info is variable (-1), this is a required parameter.
- max_workspace_size [int64_t] [default=67108864]: Maximum workspace size. The default value is 64MB
- plugins_lib_namespace [string] [default=]: TensorRT plugins library namespace, optional, set to enable plugins. Note, an empty string is a valid value for this parameter and it specifies the default TensorRT namespace.
- device_type [DeviceType] [default=DeviceType::kGPU]: The device that this layer/network will execute on, GPU or DLA.
- allow_gpu_fallback [bool] [default=true]: Allow fallback to GPU, if this layer/network can’t be executed on DLA.
- verbose [bool] [default=false]: Enable verbose log output. This option enables logging of DNN optimization progress; it is disabled by default because the output of TensorRT optimization results in too many log messages, even for LOG_LEVEL_DEBUG level.
isaac.ml.TensorReshape
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_tensors [TensorProto]: This tensor will be reshaped to the desired dimensions.
Outgoing messages
- output_tensors [TensorProto]: The tensor with the same data as the input but with the desired dimensions
Parameters
- output_tensor_dimensions [std::vector<int>] [default={}]: Tensor shape information for each tensor in the list. It must be an array of arrays, where the number of arrays equals the number of tensors in input_tensors.
isaac.ml.TensorflowInference
Description
A codelet to run inference for a Tensorflow model.
The codelet loads the model specified with the parameters model_file_path and config_file_path in the start function. The expected name and shape of input and output channels is defined via the parameters input_tensor_info and output_tensor_info.
This codelet does not use macros to define input and output channels. Instead, channels are automatically set up based on the information in the input_tensor_info and output_tensor_info parameters. By default the ops_name is used as the channel name. However, sometimes this name is too long or not a valid identifier. In that case the channel name can be specified via the channel field. Valid channel names must only contain alpha-numeric characters or underscores. For example, consider the following configuration for input_tensor_info:
[
{
"ops_name": "layer4/misc/baseline",
"channel": "misc_baseline",
"index": 1,
"dims": [1, 20, 30, 2]
},
{
"ops_name": "image",
"index": 0,
"dims": [1, 276, 276, 3]
}
]
This will generate two input channels with names misc_baseline and image. These names can be used directly in graph JSON files to specify edges. For example:
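A hypothetical edge block in an application JSON file (the node and source component names here are invented for illustration) could look like:

```json
{
  "edges": [
    {
      "source": "camera/encoder/tensor",
      "target": "inference/TensorflowInference/image"
    },
    {
      "source": "baseline/codelet/tensor",
      "target": "inference/TensorflowInference/misc_baseline"
    }
  ]
}
```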
Warning: Currently only 32-bit floating point tensors are accepted as input and output will always be 32-bit floating point tensors.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (defined by the input_tensor_info parameter)
Outgoing messages (defined by the output_tensor_info parameter)
Parameters
- input_tensor_info [json] [default={}]: Information about the name and shape of inputs to the Tensorflow model in JSON format. This information is used to setup input channels. Example:
[ { "ops_name": "input", "index": 0, "dims": [1, 224, 224, 3] } ]
- output_tensor_info [json] [default={}]: Information about the name and shape of outputs of the Tensorflow model in JSON format. This information is used to set up output channels.
- model_file_path [string] [default=]: Path to the file from which the model data is loaded
- config_file_path [string] [default=]: Path to the file from which the config data is loaded
isaac.ml.TorchInference
Description
This codelet loads a trained Torch model and runs inference with the model.
This codelet does not use macros to define input and output channels. Instead channels are automatically setup based on the rx_channel_names and tx_channel_names parameters. Valid channel names must only contain alpha-numeric characters or underscores.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages (defined by the rx_channel_names parameter)
Outgoing messages (defined by the tx_channel_names parameter)
Parameters
- model_file_path [string] [default=]: Path to the Torch model file
- message_time_to_live [double] [default=]: Messages waiting in the queue for more than this duration will be skipped.
- rx_channel_names [std::vector<std::string>] [default={“input”}]: Names of the input channels. By default there is a single channel named input.
- tx_channel_names [std::vector<std::string>] [default={“output”}]: Names of the output channels. By default there is a single channel named output.
isaac.ml.WaitUntilDetection
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_detections [Detections2Proto]: Detections from the object detection inference components. The codelet parses this list to look for the presence of one or more user-defined class labels.
Outgoing messages
- output_detections [Detections2Proto]: Output detections containing the bounding boxes in the scene for the class/classes in question. The output is published when at least one of the classes that we are interested in is detected the required number of times.
Parameters
- labels_to_match [std::vector<std::string>] [default=]: Names of the labels to look for in the predicted detections.
- required_detection_number [int] [default=1]: The number of times an object needs to be detected before we consider it as a true positive
isaac.ml.YoloDecoder
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- bounding_box_tensor [TensorProto]: Bounding Box Parameters which include {{bounding_box1{x1, y1, x2, y2}, objectness, {probability0, probability1,…probability<N>}},
- net_config_tensor [TensorProto]: Network config parameters, which include {network_width, network_height, image_width, image_height, number of classes trained on, number of parameters for each bounding box (excluding class probabilities)}. Example: for a yolov3-tiny network trained on 6 classes with a 416x416 network input, running inference on a 1280x720 image, where the number of parameters is 5 (4 bounding box coordinates plus the objectness score), the tensor representing the network config is tensor(0-5) = {416, 416, 1280, 720, 6, 5}
Outgoing messages
- detections [Detections2Proto]: Output detections with bounding box, label, and confidence. Poses will not be populated here.
Parameters
- nms_threshold [double] [default=0.6]: Non-maximum suppression threshold
- confidence_threshold [double] [default=0.6]: Confidence threshold of the detection
- labels_file_path [string] [default="labels.txt"]: Path of the labels file with the names of the classes trained by the network. Every line of the labels file corresponds to one class name.
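The decoding step this component performs ends with non-maximum suppression over the candidate boxes. A minimal sketch of that step, assuming a simple corner-based box format and standard IoU (a generic illustration, not the Isaac implementation):

```python
def iou(a, b):
    # Boxes are (x1, y1, x2, y2); returns intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, nms_threshold=0.6, confidence_threshold=0.6):
    # detections: list of (box, confidence); defaults mirror the two
    # thresholds documented above.
    kept = []
    candidates = sorted((d for d in detections if d[1] >= confidence_threshold),
                        key=lambda d: d[1], reverse=True)
    for box, conf in candidates:
        # Keep a box only if it does not overlap a stronger kept box too much.
        if all(iou(box, k[0]) < nms_threshold for k in kept):
            kept.append((box, conf))
    return kept
```

Raising nms_threshold keeps more overlapping boxes; raising confidence_threshold discards weak candidates before suppression even runs.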
isaac.navigation.BinaryToDistanceMap
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- binary_map [ImageProto]: Incoming binary map which will be converted to a distance map (Image1ub: 0 is free, 255 is occupied)
- binary_map_lattice [LatticeProto]: Lattice information of the binary map
Outgoing messages
- distance_map [ImageProto]: Outgoing distance map which indicates the distance to nearest obstacles for every map cell
Parameters
- max_distance [double] [default=10.0]: The maximum distance used for the distance map (in meters)
- blur_factor [int] [default=0]: If set to a value greater than 0 the distance map will be blurred with a Gaussian kernel of the specified size.
- compute_distance_inside [bool] [default=false]: If enabled the distance map will also be included inside obstacles. The distance is negative and measures the distance to the obstacle boundary. Otherwise the distance inside obstacles will be 0.
- distance_map_quality [int] [default=2]: Specifies the desired quality of the distance map. Possible values are:
  - 0: Uses the QuickDistanceMapApproximated algorithm which is fast but produces artefacts
  - 1: Uses QuickDistanceMap with a queue length of 25
  - 2: Uses QuickDistanceMap with a queue length of 100
  - 3: Uses DistanceMap which computes an accurate distance map but is quite slow
- obstacle_name [string] [default="local_map"]: Name used to register the map into the obstacle_atlas component.
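The conversion can be sketched as a multi-source breadth-first search from all occupied cells, which yields the 4-connected grid distance to the nearest obstacle. This is a brute-force illustration of the idea, not the QuickDistanceMap algorithms named above; the cell size and the capping via max_distance are assumptions:

```python
from collections import deque

def distance_map(binary_map, cell_size=1.0, max_distance=10.0):
    # binary_map: 2D list, 0 = free, 255 = occupied (Image1ub convention above).
    rows, cols = len(binary_map), len(binary_map[0])
    dist = [[max_distance] * cols for _ in range(rows)]
    queue = deque()
    # Seed the search with every occupied cell at distance zero.
    for r in range(rows):
        for c in range(cols):
            if binary_map[r][c] == 255:
                dist[r][c] = 0.0
                queue.append((r, c))
    # Expand outwards, capping distances at max_distance.
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                d = min(dist[r][c] + cell_size, max_distance)
                if d < dist[nr][nc]:
                    dist[nr][nc] = d
                    queue.append((nr, nc))
    return dist
```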
isaac.navigation.CollisionMonitor
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- collision [CollisionProto]: Collision message from simulation.
Outgoing messages
- report [Json]: Log of the collision event in json format.
Parameters
- reference_frame [string] [default="unity"]: Reference frame for the poses in the collision message
- collsion_color [Vector4ub] [default=(Vector4ub{200, 100, 0, 255})]: Color of the collision contact point to display in sight
- collsion_radius [double] [default=0.15]: Radius of the collision contact point to display in sight
isaac.navigation.DetectionsToAtlas
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detections [Detections3Proto]: List of detections with their 3D poses in the robot frame. Each detection will be converted into a 2D polygon obstacle and added to the Polygon layer of the same node.
- Outgoing messages
Parameters
- obstacle_outline [std::vector<Vector2d>] [default=]: Polygon outline used for every received detection. Currently every object gets the same outline
isaac.navigation.DifferentialBaseOdometry
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- state [StateProto]: Incoming current dynamic state of the differential base which is used to estimate its ego motion in an odometry frame (type: DifferentialBaseDynamics)
Outgoing messages
- odometry [Odometry2Proto]: Outgoing ego motion estimate for the differential base.
Parameters
- max_linear_acceleration [double] [default=5.0]: Maximum linear acceleration to use (helps with noisy data or wrong data from simulation)
- max_angular_acceleration [double] [default=5.0]: Maximum angular acceleration to use (helps with noisy data or wrong data from simulation)
- odometry_frame [string] [default="odom"]: The name of the source coordinate frame under which to publish the pose estimate.
- robot_frame [string] [default="robot"]: The name of the target coordinate frame under which to publish the pose estimate.
- prediction_noise_stddev [Vector7d] [default=(MakeVector<double, 7>({0.05, 0.05, 0.35, 0.05, 1.00, 3.00, 3.0}))]: 1 sigma of noise used for the prediction model, in the following order: pos_x, pos_y, heading, speed, angular_speed, acceleration
- observation_noise_stddev [Vector4d] [default=(Vector4d{0.25, 0.45, 2.0, 10.0})]: 1 sigma of noise used for the observation model, in the following order: speed, angular_speed, acceleration
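At its core, an ego motion estimate for a differential base comes from integrating the base's linear and angular speed over time. A minimal dead-reckoning sketch of that prediction step (the actual component runs a filter with the noise parameters above; this only shows the kinematics):

```python
import math

def integrate_odometry(pose, linear_speed, angular_speed, dt):
    # pose is (x, y, heading) in the odometry frame;
    # speeds are expressed in the robot frame.
    x, y, heading = pose
    x += linear_speed * math.cos(heading) * dt
    y += linear_speed * math.sin(heading) * dt
    heading += angular_speed * dt
    return (x, y, heading)
```

Driving straight at 1 m/s for one second, integrated in 0.1 s steps, moves the pose one meter along the heading direction.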
isaac.navigation.DifferentialBaseWheelImuOdometry
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- state [StateProto]: Incoming current dynamic state of the differential base which is used to estimate its ego motion in an odometry frame (type: DifferentialBaseDynamics)
- imu [ImuProto]: Optional measurement input from IMU for better accuracy
Outgoing messages
- odometry [Odometry2Proto]: Outgoing ego motion estimate for the differential base.
Parameters
- max_linear_acceleration [double] [default=5.0]: Maximum linear acceleration to use (helps with noisy data or wrong data from simulation)
- max_angular_acceleration [double] [default=5.0]: Maximum angular acceleration to use (helps with noisy data or wrong data from simulation)
- odometry_frame [string] [default="odom"]: The name of the source coordinate frame under which to publish the pose estimate.
- robot_frame [string] [default="robot"]: The name of the target coordinate frame under which to publish the pose estimate.
- prediction_noise_stddev [Vector7d] [default=(MakeVector<double, 7>({0.05, 0.05, 0.35, 0.05, 1.00, 3.00, 3.0}))]: 1 sigma of noise used for the prediction model, in the following order: pos_x, pos_y, heading, speed, angular_speed, acceleration
- observation_noise_stddev [Vector4d] [default=(Vector4d{0.25, 0.45, 2.0, 10.0})]: 1 sigma of noise used for the observation model, in the following order: speed, angular_speed, acceleration
- use_imu [bool] [default=true]: Enables/disables the use of the IMU
- weight_imu_angular_speed [double] [default=1.0]: Determines the trust in IMU while making angular speed observations. 1.0 means using IMU only. 0.0 means using segway data only. 0.5 means taking an average
- weight_imu_acceleration [double] [default=1.0]: Determines the trust in IMU while making linear acceleration observations. 1.0 means using IMU only. 0.0 means using segway data only. 0.5 means taking an average
isaac.navigation.DistanceMap
Description
A distance map which can be used to efficiently query the distance between a given point and the map contents.
This component is not yet thread-safe: accessing the distance map cannot happen in parallel with setting it.
If the component is added to a node with an OccupancyGridMapLayer component, the distance map is automatically initialized with data from that component.
The component provides two lookup methods to access map data. In “nearest” mode the given map location is rounded to the nearest map cell location. In “smooth” mode bi-linear interpolation is used to return a smooth distance.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- map_frame [string] [default="world"]: The coordinate frame of the distance map
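The difference between the two lookup modes described above can be sketched as follows (a generic illustration over a plain 2D grid; the grid layout and cell indexing are assumptions, not the Isaac API):

```python
def lookup_nearest(grid, x, y):
    # "nearest" mode: round the continuous map location
    # to the nearest cell and return that cell's distance.
    return grid[int(round(y))][int(round(x))]

def lookup_smooth(grid, x, y):
    # "smooth" mode: bi-linear interpolation between the
    # four cells surrounding the query location.
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * grid[y0][x0] + fx * grid[y0][x0 + 1]
    bottom = (1 - fx) * grid[y0 + 1][x0] + fx * grid[y0 + 1][x0 + 1]
    return (1 - fy) * top + fy * bottom
```

The smooth lookup returns continuous values between cell centers, which matters when the distance is used inside gradient-based planners.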
isaac.navigation.FollowPath
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- plan [Plan2Proto]: The path on which the robot should drive
- feedback [Goal2FeedbackProto]: Feedback about where we are with respect to the goal
Outgoing messages
- goal [Goal2Proto]: The desired goal waypoint
Parameters
- goal_frame [string] [default="world"]: The name of the frame in which the goal will be published. Needs to be set before start.
- stationary_wait_time [double] [default=5.0]: Seconds to wait before moving on to next waypoint if robot becomes stationary
- wait_time [double] [default=1.0]: Seconds to wait after arriving at a waypoint
- loop [bool] [default=false]: If set to true we will repeat following the path
- start_from_the_beginning [bool] [default=false]: Determines the behavior upon receiving a new plan message. If true, we start the path from the beginning. Otherwise, we head to the waypoint that is closest to the previous destination.
- report_success_on_arrival [bool] [default=false]: If set to true, reportSuccess() is called upon arrival at the last pose. Shadowed by loop if that is set.
- num_waypoints_to_show [int] [default=5]: Number of upcoming waypoints on the route to show on Sight. 0 means show all the waypoints on the current route.
isaac.navigation.GoToBehavior
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- feedback [Goal2FeedbackProto]: Feedback from navigation stack
- Outgoing messages
- Parameters
isaac.navigation.GoalMonitor
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- goal [Goal2Proto]: The target destination received
- odometry [Odometry2Proto]: The odometry information with current speed
Outgoing messages
- feedback [Goal2FeedbackProto]: Feedback about the last received goal
Parameters
- arrived_position_thresholds [Vector2d] [default=Vector2d(0.5, DegToRad(15.0))]: Threshold on position to determine if the robot has arrived (positional and angular)
- stationary_speed_thresholds [Vector2d] [default=Vector2d(0.025, DegToRad(5.0))]: Threshold on speed to determine if the robot is stationary (positional and angular)
- robot_frame [string] [default="robot"]: Name of the frame representing the robot's pose
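The arrival check reduces to comparing a positional and an angular delta against the threshold vector. A sketch under assumed pose conventions (names are illustrative, not the Isaac API):

```python
import math

def has_arrived(robot_pose, goal_pose, thresholds=(0.5, math.radians(15.0))):
    # Poses are (x, y, heading); thresholds are (positional, angular),
    # mirroring the arrived_position_thresholds default above.
    dx = goal_pose[0] - robot_pose[0]
    dy = goal_pose[1] - robot_pose[1]
    # Wrap the heading difference into [-pi, pi] before taking its magnitude.
    dh = goal_pose[2] - robot_pose[2]
    dangle = abs(math.atan2(math.sin(dh), math.cos(dh)))
    return math.hypot(dx, dy) <= thresholds[0] and dangle <= thresholds[1]
```

The stationary check works the same way, but on the linear and angular speeds from the odometry message instead of the pose delta.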
isaac.navigation.GoalToPlan
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- goal [Goal2Proto]: The target destination received
Outgoing messages
- plan [Plan2Proto]: Plan consisting of a single 2D pose
- Parameters
isaac.navigation.GotoWaypointBehavior
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- feedback [Goal2FeedbackProto]: Goal feedback from the goal monitor.
- Outgoing messages
Parameters
- waypoint_name [string] [default=]: Parameter defining the name of the waypoint to be set
- waypoint_as_goal_component_name [string] [default=]: The name of the MapWaypointAsGoal component
isaac.navigation.GradientDescentLocalization
Description
A flatscan localization method using a gradient descent algorithm.
This codelet uses a flatscan to localize the robot in a known map. As this is a local optimization technique an initial guess is necessary. The computed pose of the scanner and thus the robot are written to the pose tree.
This method is quite stable compared to the more noisy particle-filter based approach. However, it is a uni-modal technique which cannot deal well with ambiguity.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: Range scan used to localize the robot
- Outgoing messages
Parameters
- map [string] [default="map"]: Name of map node to use for localization
isaac.navigation.GridSearchLocalizer
Description
An exhaustive grid search localizer.
Based on a flat range scan every possible pose in a map is checked for the likelihood that the scan was taken at that pose. The pose with the best match is written to the pose tree as a result.
This node uses a simplified and customized range scan model to increase the performance of the algorithm. The algorithm currently only works for a 360 degree range scan with constant angular resolution.
This component uses a GPU-accelerated algorithm. Depending on the map size and the GPU the runtime of the algorithm might range from less than a second to multiple seconds.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: The current sensor measurement based on which we try to localize in the map
- Outgoing messages
Parameters
- exclude_restricted_areas [bool] [default=true]: If false, robot may localize inside a restricted area defined in the map configuration
- robot_radius [double] [default=0.25]: The radius of the robot. This parameter is used to exclude poses which are too close to an obstacle.
- max_beam_error [double] [default=0.50]: The maximum beam error used when comparing range scans.
- num_beams_gpu [int] [default=256]: The GPU accelerated scan-and-match function can only handle a certain number of beams per range scan. The allowed values are {32, 64, 128, 256, 512}. If the number of beams in the range scan does not match this number a subset of beams will be taken.
- batch_size [int] [default=512]: This is the number of scans to collect into a batch for the GPU kernel. Choose a value which matches your GPU well.
- sample_distance [double] [default=0.1]: Distance between sample points in meters. The smaller this number, the more sample poses will be considered. This leads to a higher accuracy and lower performance.
- map [string] [default="map"]: Name of map node to use for localization
- flatscan_frame [string] [default="lidar"]: The name of the reference frame in which range scans arriving on the flatscan channel are defined.
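The scoring step at the heart of the search compares the measured ranges against the ranges expected at each candidate pose. A simplified CPU sketch (the actual component batches this into a GPU kernel; the per-beam error clamp mirrors max_beam_error, and the expected ranges are assumed precomputed):

```python
def scan_match_error(measured, expected, max_beam_error=0.50):
    # Mean per-beam range error, clamped so a few outlier beams
    # cannot dominate the score; lower means a better match.
    errors = [min(abs(m - e), max_beam_error)
              for m, e in zip(measured, expected)]
    return sum(errors) / len(errors)

def best_pose(measured, candidates):
    # candidates: list of (pose, expected_ranges) pairs;
    # exhaustive search over every sampled pose in the map.
    return min(candidates, key=lambda c: scan_match_error(measured, c[1]))[0]
```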
isaac.navigation.HolonomicBaseWheelImuOdometry
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- state [StateProto]: Incoming current dynamic state of the holonomic base which is used to estimate its ego motion in an odometry frame.
- imu [ImuProto]: Optional measurement input from IMU for better accuracy
Outgoing messages
- odometry [Odometry2Proto]: Outgoing ego motion estimate for the holonomic base.
Parameters
- max_acceleration [double] [default=5.0]: Maximum acceleration to use (helps with noisy data or wrong data from simulation)
- odometry_frame [string] [default="odom"]: The name of the source coordinate frame under which to publish the pose estimate.
- robot_frame [string] [default="robot"]: The name of the target coordinate frame under which to publish the pose estimate.
- prediction_noise_stddev [Vector8d] [default=(MakeVector<double, 8>({0.05, 0.05, 0.35, 0.05, 0.05, 1.00, 3.00, 3.00}))]: 1 sigma of noise used for the prediction model, in the following order: pos_x, pos_y, heading, speed_x, speed_y, angular_speed, acceleration_x, acceleration_y
- observation_noise_stddev [Vector5d] [default=(MakeVector<double, 5>({0.25, 0.25, 0.45, 2.0, 2.0}))]: 1 sigma of noise used for the observation model, in the following order: speed_x, speed_y, angular_speed, acceleration_x, acceleration_y
- use_imu [bool] [default=true]: Enables/disables the use of the IMU
- weight_imu_angular_speed [double] [default=1.0]: Determines the trust in IMU while making angular speed observations. 1.0 means using IMU only. 0.0 means using segway data only. 0.5 means taking an average
- weight_imu_acceleration [double] [default=1.0]: Determines the trust in IMU while making linear acceleration observations. 1.0 means using IMU only. 0.0 means using segway data only. 0.5 means taking an average
isaac.navigation.LocalMap
Description
Creates and maintains a dynamic obstacle grid map centered around the robot.
The dynamic grid map is always relative to the robot, with the robot at a fixed location in the upper part of the map. The previous state of the grid map is continuously propagated into the present using the robot odometry. Good odometry is critical to maintaining a sharp, high-quality grid map. New observation measurements are integrated into the local map and mixed with the current local map accumulated based on the past.
The local map “forgets” information over time to allow gradual dynamic updates. This enables it to be useful in the presence of dynamic obstacles. However thresholding might be challenging and additional object detection and tracking should be used for dynamic obstacles.
The local map is published as a grid map with the same orientation as the robot. The internal storage of the local map is larger and has the orientation of the world frame. This is done to avoid unnecessary diffusion effects from coordinate transformations.
There are a couple of relevant coordinate frames:
- robot: The coordinate frame of the robot.
- odom: The robot pose is continuously estimated with respect to this frame.
- localmap: The "grid" coordinate frame of the local map, starting in the top left corner of the map.
- workmap_grid: The "grid" coordinate frame of the workmap.
- workmap_center: The center of the workmap. The workmap is centered at the current position of the robot.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- observation_map [ImageProto]: Image2f containing the observed/occupied information for each cell
- observation_map_lattice [LatticeProto]: …
Outgoing messages
- local_map_lattice [LatticeProto]: The latest dynamic obstacle grid map lattice grid information
- local_map [ImageProto]: The latest dynamic obstacle grid map
Parameters
- cell_size [double] [default=0.05]: Size of a cell in the dynamic grid map in meters
- dimensions [Vector2i] [default=Vector2i(256, 256)]: The dimensions of the grid map in pixels
- map_offset_relative [Vector2d] [default=Vector2d(-0.25, -0.5)]: Offset of the robot within the map, expressed relative to the total map size.
- map_decay_factor [double] [default=0.99]: Before integrating a new range scan the current map is decayed with this factor. The lower this parameter the more forgetful and uncertain the local map will be.
- visible_map_decay_factor [double] [default=0.92]: Cells which were observed have an additional decay to better deal with moving obstacles. This allows a different forgetfulness for cells which are currently visible.
- localmap_frame [string] [default="localmap"]: The name of the map coordinate frame. This will be used to write the pose of the map relative to the robot in the pose tree.
- odom_frame [string] [default="odom"]: The name of the coordinate frame used to continuously update the local map.
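The "forgetting" behavior reduces to multiplying the accumulated value of each cell by a decay factor before mixing in the new observation. A per-cell sketch (the exact mixing rule is an assumption; only the two decay parameters come from the listing above):

```python
def update_cell(accumulated, observed, was_visible,
                map_decay_factor=0.99, visible_map_decay_factor=0.92):
    # Decay the accumulated occupancy; cells that are currently
    # visible forget faster, per visible_map_decay_factor.
    decay = map_decay_factor * (visible_map_decay_factor if was_visible else 1.0)
    decayed = accumulated * decay
    if observed is None:
        return decayed
    # Mix in the new observation (here: keep the stronger evidence).
    return max(decayed, observed)
```

Lower decay factors make the map more forgetful, which helps with moving obstacles at the cost of more uncertainty in static areas.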
isaac.navigation.LocalizationEvaluation
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
- Parameters
isaac.navigation.LocalizationMonitor
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: Incoming range scan used to monitor the robot
- Outgoing messages
Parameters
- map [string] [default="map"]: Name of map node which contains the reference map
- range_scan_model [string] [default="shared_robot_model"]: Name of node which contains the RangeScanModel component which is used to compare range scans
- score_threshold [double] [default=0.05]: If comparison of the measured range scan against the expected range scan gives a score below this threshold the monitor reports failure. The range of the score depends on the range scan model, but is typically between 0 and 1 with 1 being the best.
- beam_distance_threshold [double] [default=0.2]: Beams where measured distance and expected distance are within this tolerance are "good" beams.
- good_beams_threshold [double] [default=0.4]: If the percentage of good beams of the current range scan match drops below this threshold the monitor reports failure. See beam_distance_threshold.
- far_beams_threshold [double] [default=0.2]: If the percentage of far beams of the current range scan match grows above this threshold the monitor reports failure. A far beam is a beam where the measured distance is larger than the expected distance by beam_distance_threshold.
- flatscan_frame [string] [default="lidar"]: Name of the coordinate frame of the sensor which produced the flatscan
- robot_frame [string] [default="robot"]: Name of the coordinate frame of the robot base
isaac.navigation.LocalizeBehavior
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- global_rmse_threshold [double] [default=1.0]: If the RMSE of the global localizer falls below this threshold it is assumed to be localized.
- global_min_progress [double] [default=0.75]: Minimum progress of the global localizer before we start considering the error threshold
- local_score_threshold [double] [default=0.0]: If the score of the local localizer falls below this threshold it is assumed to be lost.
- sleep_between_state_changes [double] [default=5.0]: Duration before we consider switching between localizers
- global_min_error [string] [default="global_localization/grid_search_localizer/min_error"]: Link to read the minimum error in global localization
- global_progress [string] [default="global_localization/grid_search_localizer/progress"]: Link to read the progress in global localization
- local_max_score [string] [default="scan_localization/isaac.navigation.ParticleFilterLocalization/max_score"]: Link to read the maximum score of scan localization
- skip_global_localization [bool] [default=false]: If enabled skip global localization at the beginning
isaac.navigation.MapWaypointAsGoal
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- desired_waypoint [GoalWaypointProto]: Receives the desired waypoint
Outgoing messages
- goal [Goal2Proto]: Output goal for the robot
Parameters
- map [string] [default="map"]: Map node for looking up waypoints
- waypoint [string] [default=""]: The waypoint which is published as the goal. If empty the current pose will be published.
isaac.navigation.MapWaypointAsGoalSimulator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- desired_waypoint [GoalWaypointProto]: Receives the desired waypoint. An empty string as waypoint name will be interpreted as stop.
- Outgoing messages
Parameters
- waypoint_map_layer [string] [default="map/waypoints"]: Map node for looking up waypoints. If the target waypoint is not inside this map layer the simulated motion will stop.
- average_distance [double] [default=5.0]: The average distance between waypoints
- max_speed [double] [default=1.0]: The maximum traveling speed of the agent
isaac.navigation.MapWaypointsAsPlan
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
Outgoing messages
- plan [Plan2Proto]: The plan generated as specified via parameters
Parameters
- map [string] [default="map"]: Map node for looking up waypoints
- waypoints [std::vector<std::string>] [default=]: The list of waypoint names which is published as a plan. If a name does not exist in the map, the whole list will be ignored.
- text_size [double] [default=20.0]: The size of the text used in sight, in pixels (px)
isaac.navigation.MoveAndScan
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- waypoints [Plan2Proto]: Input waypoint denoting waypoint poses.
Outgoing messages
- waypoints_with_orientations [Plan2Proto]: Output waypoint plan along with multiple angles of orientation
Parameters
- num_directions [int] [default=4]: Number of angles to turn the robot
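Expanding each incoming waypoint into num_directions evenly spaced orientations can be sketched as follows (even angular spacing is an assumption; the pose tuple is illustrative):

```python
import math

def expand_waypoint(pose, num_directions=4):
    # pose: (x, y, heading). Emit one pose per direction so the robot
    # scans the surroundings from the same position at several angles.
    return [(pose[0], pose[1], pose[2] + i * 2.0 * math.pi / num_directions)
            for i in range(num_directions)]
```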
isaac.navigation.MoveUntilArrival
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- goal [Goal2Proto]: The target destination received
- feedback [Goal2FeedbackProto]: Feedback from navigation stack
- Outgoing messages
Parameters
- navigation_mode [string] [default="navigation_mode/isaac.navigation.GroupSelectorBehavior"]: Parameter to get the navigation mode behavior
- behavior_stop [string] [default="stop"]: Name of the mode which makes the robot stop
- behavior_move [string] [default="navigate"]: Name of the mode which allows the robot to move
isaac.navigation.NavigationMap
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- occupancy_grid_prefix [string] [default="occupancy"]: The name prefix used for occupancy grid map layers
- waypoint_prefix [string] [default="waypoints"]: The name prefix used for waypoint map layers
- restricted_area_prefix [string] [default="restricted_area"]: The name prefix used for keep clear area map layers
- global_localization_area_prefix [string] [default="localization_area"]: The name prefix used for global localization area map layers
isaac.navigation.NavigationMonitor
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- camera [ColorCameraProto]: Camera input. This is needed in order to publish the robot state with the acquisition time of the input image.
Outgoing messages
- robot_state [RobotStateProto]: Proto used to publish the robot's state (position, speed and displacement since the last update)
Parameters
- tick_periodically [bool] [default=true]: Boolean to determine if we need to tick periodically. During periodic ticks, we can check displacement once every interval and publish the output with the current time as acquisition time. If we tick on message instead, the output can be published with the acquisition time of the input message.
- angle_threshold [double] [default=DegToRad(15.0)]: Angle in radians that the robot needs to move before publishing
- distance_threshold [double] [default=0.5]: Distance in meters the robot needs to move before publishing
- var_rx_speed_pos [string] [default=]: Linear speed as set by DifferentialBaseOdometry
- var_rx_speed_rot [string] [default=]: Angular speed as set by DifferentialBaseOdometry
isaac.navigation.OccupancyMapCleanup
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- rough_map_lattice [LatticeProto]: An occupancy grid map lattice information
- rough_map [ImageProto]: An occupancy grid map which needs to be cleaned
Outgoing messages
- clean_map [ImageProto]: A clean occupancy grid map
Parameters
- clear_region_frame [string] [default=]: The coordinate frame of the clear region
- clear_region [geometry::RectangleD] [default=]: A small rectangular area around the robot with this shape is always marked as free to prevent the robot from seeing itself. If the rectangle is too big, nearby obstacles might be ignored. Format is [[x_min,x_max],[y_min,y_max]], unit is meters.
- additional_clear_region [geometry::RectangleD] [default=]: An additional region which will be cleared similar to clear_region.
isaac.navigation.OccupancyToBinaryMap
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- occupancy_map_lattice [LatticeProto]: Incoming occupancy map lattice information
- occupancy_map [ImageProto]: Incoming occupancy map which will be converted and stored
Outgoing messages
- binary_map [ImageProto]: Computed binary map (Image1ub)
Parameters
- mean_threshold [int] [default=128]: Grid cells in the cost map which have a mean value greater than this threshold are considered to be blocked.
- standard_deviation_threshold [int] [default=128]: Grid cells in the cost map which have a standard deviation greater than this threshold are considered to be uncertain.
- is_optimistic [bool] [default=false]: If enabled uncertain cells will be treated as free, otherwise they are considered to be blocked.
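The per-cell conversion follows directly from the three parameters above (a sketch; the Isaac implementation operates on whole images rather than single cells):

```python
def to_binary_cell(mean, stddev, mean_threshold=128,
                   standard_deviation_threshold=128, is_optimistic=False):
    # Returns 255 for blocked, 0 for free (Image1ub convention above).
    if stddev > standard_deviation_threshold:
        # Uncertain cell: treated as free if optimistic, blocked otherwise.
        return 0 if is_optimistic else 255
    return 255 if mean > mean_threshold else 0
```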
isaac.navigation.ParticleFilterLocalization
Description
Localizes the robot in a given map based on a flat range scan.
A Bayesian filter based on a particle filter is used to keep track of a multi-modal hypothesis distribution. For every tick the particle distribution is updated based on an ego motion estimate read from the pose tree. Particles are then evaluated against the measured range scan using a range scan model to compute new particle scores. Particles with the highest scores are combined in a weighted average to compute the new best estimate of the robot pose. The robot pose is written into the pose tree as a result.
Range scans are compared using a range scan model. In order for this node to work properly a component which is derived from RangeScanModel needs to be created and referenced in the parameter.
Particles are initialized in the start function using an initial estimate of the robot pose which is read from the pose tree. The GridSearchLocalizer component can for example be used to provide this initial estimate. Alternatively the initial pose could also be provided using a PoseInitializer component.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: Incoming range scan used to localize the robot
- flatscan_2 [FlatscanProto]: A second range scan which can be used to localize the robot.
Outgoing messages
- samples [Pose2Samples]: The current weighted samples the particle filter is tracking
Parameters
- num_particles [int] [default=75]: The number of particles used in the particle filter
- absolute_predict_sigma [Vector3d] [default=Vector3d(0.04, 0.04, DegToRad(5.0))]: Standard deviation of Gaussian noise "added" to the estimated pose during the predict step of the particle filter. This value is a rate per second and will be scaled by the time step. Note the use of sqrt(dt) for scaling the standard deviation, which is required when summing up Normal distributions. The vector contains three parameters:
  - noise along the forward direction (X axis)
  - noise along the sideways direction (Y axis)
  - noise for the rotation
- relative_predict_sigma [Vector3d] [default=Vector3d(0.10, 0.10, 0.10)]: Standard deviation of Gaussian noise which is applied relative to the current speed of the robot and scaled by the time step. The vector contains three parameters as explained in absolute_predict_sigma.
- initial_sigma [Vector3d] [default=Vector3d(0.3, 0.3, DegToRad(20.0))]: Standard deviation of Gaussian noise which is applied to the initial pose estimate when the particle filter is (re-)seeded.
- output_best_percentile [double] [default=0.10]: The final pose estimate is computed using the average of the best particles. For example a value of 0.10 would mean that the top 10% of particles with highest scores are used to compute the final estimate.
- reseed_particles [bool] [default=false]: Set to true to request reseeding particles. This will be reset to false when the particle filter was reseeded.
- map [string] [default="map"]: Node of the map which contains map data. The map is used to compute which range scan would be expected from a hypothetical robot pose.
- range_scan_model [string] [default="shared_robot_model"]: Name of the node which contains a component of type RangeScanModel which is then used to compare range scans when evaluating particles against a new incoming message.
- flatscan_frame [string] [default="lidar"]: The name of the reference frame in which range scans arriving on the flatscan channel are defined.
- flatscan_2_frame [string] [default="lidar_2"]:
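The predict/evaluate/average loop described above can be sketched as follows. This is a generic particle filter illustration, not the Isaac implementation; the scoring function is a placeholder for the range scan model, and resampling is omitted:

```python
import math
import random

def particle_filter_step(particles, move, score_fn,
                         sigma=(0.04, 0.04, math.radians(5.0)),
                         dt=1.0, best_fraction=0.10):
    # particles: list of (x, y, heading). Predict: apply the ego motion
    # estimate plus Gaussian noise scaled by sqrt(dt), as described for
    # absolute_predict_sigma above.
    s = math.sqrt(dt)
    predicted = [(x + move[0] + random.gauss(0, sigma[0] * s),
                  y + move[1] + random.gauss(0, sigma[1] * s),
                  h + move[2] + random.gauss(0, sigma[2] * s))
                 for x, y, h in particles]
    # Evaluate each particle against the measurement, then average the
    # top percentile, mirroring output_best_percentile.
    ranked = sorted(predicted, key=score_fn, reverse=True)
    best = ranked[:max(1, int(len(ranked) * best_fraction))]
    n = len(best)
    return (sum(p[0] for p in best) / n,
            sum(p[1] for p in best) / n,
            sum(p[2] for p in best) / n)
```

Averaging headings naively, as here, only works when the best particles agree; a full implementation would average on the circle.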
isaac.navigation.ParticleSwarmLocalization
Description
An adaptive localization algorithm using a swarm of particles.
A particle swarm algorithm is used to localize the robot based on a single flat range scan. The pose with the best match is written to the pose tree as a result.
Consider using GridSearchLocalizer instead as it might provide better precision for the number of particles used.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: The current sensor measurement based on which we try to localize in the map
- Outgoing messages
Parameters
- num_particles [int] [default=1000]: The number of particles used by PSO
- pso_omega [double] [default=0.5]: Omega parameter of PSO
- pso_phi [Vector3d] [default=(Vector3d{0.05, 0.05, 0.1})]: Phi parameter of PSO (values are for dx, dy, da)
- pso_phi_p_to_g [double] [default=1.0]: PSO parameter to express ratio between phi_p and phi_g
- map [string] [default=”map”]: Map node to use for localization
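The omega and phi parameters above follow the canonical particle-swarm update rule. A hedged Python sketch (the split of phi into personal-best and global-best terms via pso_phi_p_to_g is an assumption, and `pso_step` is a hypothetical name):

```python
import random

def pso_step(x, v, p_best, g_best, omega, phi, phi_p_to_g, rng=random):
    """One canonical particle-swarm update for a single particle.

    phi is a per-dimension attraction magnitude (here dx, dy, da);
    phi_p_to_g sets the assumed ratio between the personal-best and
    global-best attraction terms.
    """
    phi_g = [p / (1.0 + phi_p_to_g) for p in phi]
    phi_p = [phi_p_to_g * pg for pg in phi_g]  # phi_p / phi_g == phi_p_to_g
    new_v = []
    for i in range(len(x)):
        new_v.append(omega * v[i]
                     + phi_p[i] * rng.random() * (p_best[i] - x[i])
                     + phi_g[i] * rng.random() * (g_best[i] - x[i]))
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```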
isaac.navigation.PoseAsGoal
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
Outgoing messages
- goal [Goal2Proto]: Output goal for the robot
Parameters
- goal_frame [string] [default=”pose_as_goal”]: Name of the goal coordinate frame
- reference_frame [string] [default=”world”]: Name of the reference coordinate frame
- static_frame [string] [default=”world”]: Name of a frame that is not moving
- new_message_threshold [Vector2d] [default=Vector2d(1e-3, DegToRad(0.01))]: A new message will be published whenever change in goal pose exceeds this threshold. Values are for Euclidean distance and angle respectively.
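The new_message_threshold check can be sketched as follows (an illustrative Python version; `exceeds_threshold` is a hypothetical helper, not the component's API):

```python
import math

def exceeds_threshold(prev_pose, new_pose, threshold):
    """Check whether the goal pose moved enough to publish a new message.

    Poses are (x, y, angle); threshold is (euclidean_distance, angle) as in
    the new_message_threshold parameter.
    """
    dx = new_pose[0] - prev_pose[0]
    dy = new_pose[1] - prev_pose[1]
    # Wrap the angle difference into [0, pi] before comparing.
    da = abs(math.atan2(math.sin(new_pose[2] - prev_pose[2]),
                        math.cos(new_pose[2] - prev_pose[2])))
    return math.hypot(dx, dy) > threshold[0] or da > threshold[1]
```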
isaac.navigation.PoseHeatmapGenerator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- robot_state [RobotStateProto]: Input robot state containing position, speed and the displacement since the last update
Outgoing messages
- heatmap [HeatmapProto]: Output HeatmapProto containing heatmap of probabilities, grid cell size and map frame
Parameters
- custom_cell_size [double] [default=2.0]: Desired size of each cell
- robot_radius [double] [default=0.40]: Robot radius
- kernel_size [int] [default=9]: Size of the gaussian kernel to diffuse weights
- map [string] [default=”map”]: Map node to use for localization
isaac.navigation.RandomMapPoseSampler
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- rhs_frame [string] [default=]: Name of the pose to set on the pose tree. lhs is world, determined by the RobotPoseGenerator.
- max_trials [int] [default=50]: Maximum number of trials to find a valid pose before giving up and report failure.
isaac.navigation.RandomWalk
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- feedback [Goal2FeedbackProto]: Feedback about our progress towards the goal
Outgoing messages
- goal [Goal2Proto]: Output goal for the robot
Parameters
- timeout [double] [default=10.0]: If the robot doesn’t move for this time period it will pick a new goal
- goal_position_threshold [double] [default=0.3]: Goal distance threshold sent to the planner
isaac.navigation.RangeScanModelClassic
Description
Scan-to-scan matching model after Fox-Burgard-Thrun
Range scan models describe how well two range scans match with each other. The matching result is expressed as a similarity value in the range [0,1]. Similar range scans will result in a value close to one, while dissimilar range scans will give a value close to zero.
Range scan models are for example used by scan localization components like the ParticleFilterLocalization or the GridSearchLocalizer. In order for these components to work properly you will have to create a range scan component inside a node and specify the corresponding configuration parameter for the localization components.
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- Outgoing messages
Parameters
- noise_sigma [double] [default=0.25]: A parameter which defines the width of the Gaussian for range measurement noise
- unexpected_falloff [double] [default=0.10]: A parameter which defines the shape of the beam model for unexpected obstacles
- max_range [double] [default=100.0]: The maximum range. If the beam range is equal to this value it is considered out of range
- weights [Vector4d] [default=Vector4d(0.25, 0.25, 0.25, 0.11)]: Weights of the four contributions for the beam model in the following order:
0: measurement noise
1: unexpected obstacles
2: random measurement
3: max range
- smoothing [double] [default=0.01]: Smoothing factor for the overall shape function.
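A hedged Python sketch of such a four-component beam mixture (normalization constants are simplified for illustration; `beam_likelihood` is a hypothetical name, not the component's API):

```python
import math

def beam_likelihood(measured, expected, weights, noise_sigma=0.25,
                    unexpected_falloff=0.10, max_range=100.0):
    """Weighted beam-model mixture in the style of Fox, Burgard and Thrun.

    weights: [hit (measurement noise), unexpected obstacle, random, max range]
    """
    # 1. Gaussian around the expected range (measurement noise)
    p_hit = math.exp(-0.5 * ((measured - expected) / noise_sigma) ** 2)
    # 2. Exponential falloff for unexpected obstacles in front of the target
    p_short = math.exp(-unexpected_falloff * measured) if measured < expected else 0.0
    # 3. Uniform random measurement anywhere in [0, max_range]
    p_rand = 1.0 / max_range
    # 4. Spike at the maximum range (out-of-range return)
    p_max = 1.0 if measured >= max_range else 0.0
    return (weights[0] * p_hit + weights[1] * p_short
            + weights[2] * p_rand + weights[3] * p_max)
```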
isaac.navigation.RangeScanModelFlatloc
Description
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- Outgoing messages
Parameters
- max_beam_error_far [double] [default=0.50]: Each beam for which the measured range is further away than the expected range can contribute at most this value to the total error.
- max_beam_error_near [double] [default=0.50]: Similar to max_beam_error_far but for the case when the measured range is closer than the expected range
- percentile [double] [default=0.9]: Specifies the percentile of ranges to use to compute a combined distance over multiple beams. Valid range ]0,1]. If set to 1 all ranges are taken. If set to lower than 1 only the given percentile of beams with the lowest error is taken.
- max_weight [double] [default=15.0]: The maximum weight which can be given to a beam. Beams are weighted linearly based on the average between measured and expected distance up to a maximum of this value.
- sharpness [double] [default=5.0]: The error returned by the distance function is transformed to unit range using the following function: p = exp(-sharpness * error/max_beam_error). If sharpness is zero the actual error will be returned.
- invalid_range_threshold [double] [default=0.5]: Beams with a range smaller than or equal to this distance are considered to have returned an invalid measurement.
- out_of_range_threshold [double] [default=100.0]: Beams with a range larger than or equal to this distance are considered to not have hit an obstacle within the maximum possible range of the sensor.
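The sharpness transform described above maps a beam error into a unit-range score. A minimal Python sketch of that formula (`error_to_unit_range` is a hypothetical name):

```python
import math

def error_to_unit_range(error, sharpness=5.0, max_beam_error=0.5):
    """Transform a beam error to a (0, 1] similarity score.

    Implements p = exp(-sharpness * error / max_beam_error) as described
    for the sharpness parameter; with sharpness == 0 the raw error is
    returned instead.
    """
    if sharpness == 0.0:
        return error
    return math.exp(-sharpness * error / max_beam_error)
```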
isaac.navigation.RangeScanToObservationMap
Description
Compute an observation map from a flatscan. An observation map is represented as an Image2f where the first channel corresponds to the probability that a cell has been observed, while the second channel corresponds to the probability that a cell is occupied.
The map is computed relative to the sensor position, the axes are in the same direction as the sensor frame, and the translation is controlled by the dimensions and map_offset_relative parameters.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: The observation_map is created based on flat range scans.
Outgoing messages
- observation_map [ImageProto]: Image2f containing the observed/occupied information for each cell
- observation_map_lattice [LatticeProto]: The lattice information of the ObservationMap
Parameters
- cell_size [double] [default=0.05]: Size of a cell in the dynamic observation map in meters
- dimensions [Vector2i] [default=Vector2i(256, 256)]: The dimensions of the observation map in pixels
- map_offset_relative [Vector2d] [default=Vector2d(0.25, 0.5)]: Offset of the robot within the map, expressed as a fraction of the total map size.
- wall_thickness [double] [default=0.20]: When integrating a flatscan an area of the given thickness behind a hit is marked as solid. This value should be at least in the order of the chosen cell size.
- sensor_frame [string] [default=”lidar”]: The name of the reference frame in which range scans arriving on the flatscan channel are defined.
- sensor_lattice_frame [string] [default=”lidar_lattice”]: The name of the map coordinate frame. This will be used to write the pose of the map relative to the robot in the pose tree.
isaac.navigation.RobotPoseGenerator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- robot_model [string] [default=”navigation.shared_robot_model”]: The name of the robot model node which is used to find a valid goal
- map [string] [default=”map”]: The name of the map to generate poses on.
- static_obstacle_names [std::vector<std::string>] [default=std::vector<std::string>({“map/isaac.navigation.DistanceMap”, “map/restricted_area”})]: Name of the static obstacles.
- model_error_margin [double] [default=0.1]: The smallest distance (in meters) allowed between the robot and obstacles in the scene
isaac.navigation.RobotRemoteControl
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- js_state [JoystickStateProto]: Joystick state including information about which buttons are pressed
- ctrl [StateProto]: The command from our controller
Outgoing messages
- segway_cmd [StateProto]: The command sent to the segway
Parameters
- disable_deadman_switch [bool] [default=false]: Disables the deadman switch regardless of whether a joystick is connected or not
- differential_joystick [bool] [default=true]: If set to true this is using a differential control model. Otherwise a holonomic control model is used.
- manual_button [int] [default=4]: The ID for the button used to manually control the robot with the gamepad. When this button is pressed on the joystick, we enter manual mode where we read speed commands from joystick axes. For a PlayStation Dualshock 4 Wireless Controller, this button corresponds to ‘L1’.
- autonomous_button [int] [default=5]: The ID for the button used to allow the AI to control the output. When this button is pressed but the manual button is not pressed on the joystick, we enter autonomous mode where we read speed commands from the controller that is transmitting to our ‘ctrl’ channel here. For a PlayStation Dualshock 4 Wireless Controller, this button corresponds to ‘R1’.
- move_axes [int] [default=0]: The axes used for translating the robot in manual mode. For a PlayStation Dualshock 4 Wireless Controller, these axes correspond to the ‘left stick’.
- rotate_axes [int] [default=1]: The axis used for rotating the robot in manual mode. For a PlayStation Dualshock 4 Wireless Controller, this axis corresponds to the ‘right stick’.
- linear_speed_max [double] [default=1.0]: The maximal allowed manual speed for linear movements.
- angular_speed_max [double] [default=0.8]: The maximal allowed manual speed for rotation.
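The interaction between the manual button, the autonomous button, and the deadman switch can be sketched as follows (a simplified illustration; `select_command` is a hypothetical helper, and the priority of manual over autonomous mode is inferred from the parameter descriptions):

```python
def select_command(buttons, manual_cmd, autonomous_cmd,
                   manual_button=4, autonomous_button=5,
                   disable_deadman_switch=False):
    """Pick the command forwarded to the robot base.

    buttons is the set of currently pressed joystick button IDs. Manual
    mode wins over autonomous; with neither button pressed (and the
    deadman switch enabled) the robot is stopped.
    """
    if manual_button in buttons:
        return manual_cmd                # manual joystick control
    if autonomous_button in buttons or disable_deadman_switch:
        return autonomous_cmd            # AI-controlled output
    return (0.0, 0.0)                    # deadman switch: stop the robot
```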
isaac.navigation.RobotViewer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- pose_estimate [Pose2MeanAndCovariance]: The current position estimate of the robot
- Outgoing messages
Parameters
- reference_frame [string] [default=”world”]: The name of the reference coordinate frame
- robot_pose_name [string] [default=”robot”]: Name of robot pose to look up on pose tree
- robot_color [Vector4ub] [default=(Vector4ub{0, 100, 150, 255})]: Color of the robot pose to display in sight
- robot_mesh [string] [default=”carter”]: Name of the robot asset used for display in sight.
- robot_model [string] [default=”navigation.shared_robot_model/SphericalRobotShapeComponent”]: Name of the robot model component
- trail_color [Vector4ub] [default=(Vector4ub{0, 150, 200, 255})]: Color of the robot trail trajectory used in sight.
- trail_count [int] [default=30]: The number of locations to show for the robot trail.
- trail_time_step [double] [default=0.5]: Time difference between points shown as part of the trail.
isaac.navigation.TravellingSalesman
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
Outgoing messages
- waypoints [Plan2Proto]: Output plan, which is a list of poses that the robot can move to
Parameters
- max_distance_factor [double] [default=2.25]: Factor controlling the maximum distance between two points to be connected
- robot_radius [double] [default=0.50]: Robot radius
- target_cell_size [double] [default=0.50]: The size of step we take to look for freespace and put waypoints
- random_waypoints [int] [default=200000]: Number of random waypoints that we can try and add to the graph
- map [string] [default=”map”]: Name of the map in consideration
isaac.navigation.VirtualGamepadBridge
Description
Bridge for Virtual Gamepad:
- Receives virtual controller state messages from Sight’s Virtual Gamepad widget.
- Uses bidirectional communication between backend and frontend.
- Forwards the received controller messages to other C++ codelets.
- Sends relevant backend status info from the codelets to Sight at regular intervals.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- request [nlohmann::json]: Request to the Bridge
Outgoing messages
- reply [nlohmann::json]: Reply from the bridge to Sight
- joystick [JoystickStateProto]: TX proto for Gamepad State
Parameters
- sight_widget_connection_timeout [double] [default=30.0]: Sight Widget Connection Timeout in seconds
- num_virtual_buttons [int] [default=12]: Number of buttons for a simulated Virtual Joystick. Keeping default value consistent with packages/sensors/Joystick.hpp
- deadman_button [int] [default=4]: Button number for failsafe. Keeping consistent with packages/navigation/RobotRemoteControl.hpp
isaac.navsim.ScenarioManager
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- scenario_reply [JsonProto]: Incoming messages to get status of the current scene and scenario from NavSim
Outgoing messages
- scenario_control [JsonProto]: Outgoing messages to, for example, communicate the desired scene and scenario to NavSim
- robot [ActorGroupProto]: Outgoing messages to control actors creation in NavSim
Parameters
- scene [string] [default=]: The desired scene
- scenario [int] [default=-1]: The desired scenario. scenario<0 is ignored by scene loader and the default scenario is used
- robot_prefab [string] [default=]: Name of the robot prefab. This and the next two parameters are used to spawn and initialize the robot in simulation. See ActorGroupProto/SpawnRequest for detail.
- robot_name [string] [default=”robot”]: Name for the robot. See ActorGroupProto/SpawnRequest for detail.
- robot_pose_name [string] [default=”robot_init_gt”]: Rhs name for the initial robot pose.
- ref_pose_name [string] [default=”world”]: Lhs name for the initial robot pose.
isaac.navsim.ScenarioMonitor
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- collision [Json]: Collision message from collision monitor.
- goal_feedback [Goal2FeedbackProto]: Feedback from navigation stack.
- gt_goal_feedback [Goal2FeedbackProto]: Ground truth feedback from simulator.
- poses [Json]: Additional pose report to append to the detail report file.
Outgoing messages
- state [StateProto]: Current state of execution
Parameters
- wait_before_start [double] [default=1.0]: The number of seconds to wait in the start function for localization to finish
- arrival_tolerance [double] [default=0.5]: Seconds to wait for actual (ground-truth) arrival after robot claims arrival. If the robot doesn’t actually arrive within this time frame, it was wrong to claim arrival. So, the state becomes “Mistaken”.
- goal_pose_name [string] [default=”goal”]: Name of the goal pose on pose tree
- report_path [string] [default=”/tmp/navsim”]: Path of report output file. If the path doesn’t exist, it will be created. The codelet will generate a detailed report file (one json per tick) to report_path/[uuid]_monitor.jsonl where uuid is the app’s uuid.
- stop_app [bool] [default=false]: Stops the app when the monitor succeeds or fails. This may happen before the execution state message tx_state can be processed, so set this to false if your app has codelets receiving and processing the tx_state message.
- scene [string] [default=]: Filename of the scene being run in simulation
- scenario [int] [default=]: Index of the scenario being run in simulation
- maximum_time [string] [default=”60s”]: Maximum execution time in seconds
- localization_error [Vector2d] [default=Vector2d(3.0, DegToRad(90.0))]: Localization tolerance beyond which we consider the robot lost. Units are meters for the 2D position and radians for the Z rotation
isaac.object_pose_estimation.CodebookLookup
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- features [TensorProto]: A batch of feature vectors encoded as a rank 2 tensor. Each feature will be checked against the codebook. The shape of the tensor is:
Outgoing messages
- codes [TensorProto]: A rank 3 tensor which stores the code vectors for each input feature vector. For each feature vector num_output code vectors are returned. The shape of the tensor is:
- correlations [TensorProto]: A rank 3 tensor which stores the correlation between code vectors and the input feature vector. There is one correlation per output code vector. The shape of the tensor is:
Parameters
- codebook_path [string] [default=]: Path to the file containing the codebook in line JSON format. The codebook must contain one line per code word with the following format:
Here (f_1, …, f_n) is the n-dimensional feature vector and (c_1, …, c_m) the m-dimensional code vector. The dimension of feature and code vectors must be identical for all entries.
- num_output [int] [default=2]: The number of features to extract. Only the best one will be provided
isaac.object_pose_estimation.CodebookPoseSampler
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
Outgoing messages
- pose [Pose3dProto]: Pose to teleport the camera to
Parameters
- radius [std::vector<double>] [default={}]: List of radii of the spheres where the view points are sampled. This enables codebook generation at multiple distances.
- center [Vector3d] [default=Vector3d::Zero()]: Center of the sphere where the view points are sampled. Spheres of all radii are centered at this location.
- num_view [int] [default=2562]: Minimal number of view points on the view sphere
- num_inplane [int] [default=1]: Number of in-plane rotations at each view point
- min_roll [double] [default=-Pi<double>]: Minimum roll for codebook view sampling from sphere. Note: Minimum roll must be in range [-Pi, Pi]
- max_roll [double] [default=Pi<double>]: Maximum roll for codebook view sampling from sphere. Note: Maximum roll must be in range (min_roll, min_roll + 2*Pi]
- min_pitch [double] [default=-Pi<double>/2]: Minimum pitch for codebook view sampling from sphere. Note: Minimum pitch must be in range [-Pi/2, Pi/2]
- max_pitch [double] [default=Pi<double>/2]: Maximum pitch for codebook view sampling from sphere. Note: Maximum pitch must be in range (min_pitch, Pi/2]
- min_yaw [double] [default=-Pi<double>]: Minimum yaw for codebook view sampling from sphere. Note: Minimum yaw must be in range [-Pi, Pi]
- max_yaw [double] [default=Pi<double>]: Maximum yaw for codebook view sampling from sphere. Note: Maximum yaw must be in range (min_yaw, min_yaw + 2*Pi]
- report_success [bool] [default=false]: If report_success is true, codelet reports success after view points are sampled. If not, the codelet ends the app after all view points are sampled.
- max_ticks_after_success [int] [default=10]: Number of ticks to wait after the view points are sampled to close the app. This param is used when report_success is set to false.
isaac.object_pose_estimation.CodebookWriter
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- feature [TensorProto]: A rank-2 tensor containing the feature vector
- code [TensorProto]: A rank-1 tensor containing the code vector
Outgoing messages
- codebook [JsonProto]: A JSON array with two entries, one each for feature and code vector. Features and code vectors themselves are stored as arrays of floating point numbers.
- Parameters
isaac.object_pose_estimation.ImagePoseEncoder
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_image [ColorCameraProto]: Input ColorCameraProto of the full image, which is used for extracting camera pinhole parameters
- input_detections [Detections2Proto]: List of bounding box detections to compute translation parameters in the codebook
- input_poses [Detections3Proto]: List of input object poses for storing orientation labels in the codebook.
Outgoing messages
- pose_encoding [TensorProto]: A rank-1, 32-bit float tensor with the following nine entries: 1 - 4: quaternions for the 3D orientation, 5: bounding box diagonal 6: rendered distance of the camera from the object along z 7: focal length of the camera used to generate codebook 8, 9: offset of bounding box center from pinhole center in image coordinates
- Parameters
isaac.object_pose_estimation.PoseEstimation
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_image [ColorCameraProto]: Input color camera proto to get pinhole parameters.
- input_detections [Detections2Proto]: Input bounding box from any object detection model (YOLO/ResNet).
- codes [TensorProto]: Input code vectors coming from a codebook, assuming a list of quaternion/center offset/diagonal
- correlations [TensorProto]: Correlations for code vectors indicating how good the match was.
Outgoing messages
- output_poses [Detections3Proto]: Output poses and bounding box.
- Parameters
isaac.orb.ExtractAndVisualizeOrb
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_image [ColorCameraProto]: Input image
- Outgoing messages
Parameters
- max_features [int] [default=500]: Maximum number of features to extract
- fast_threshold [int] [default=20]: FAST threshold, lower means higher sensitivity. Note that this threshold controls how many features are extracted before filtering the features down to the requested number of max_features. Decrease this parameter if the resulting amount of features is too low (that is, constantly below max_features).
- grid_num_cells_linear [int] [default=8]: How many cells to split the image into for spatial regularization
- downsampling_factor [double] [default=0.7]: How much to reduce image size between ORB feature levels
- max_levels [int] [default=4]: Maximum number of different ORB levels
isaac.perception.AprilTagsDetection
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- image [ColorCameraProto]: RGB input image. Image should be undistorted prior to being passed in here.
Outgoing messages
- april_tags [FiducialListProto]: Output, List of AprilTag fiducials
Parameters
- max_tags [int] [default=50]: Maximum number of AprilTags that can be detected
- tag_dimensions [double] [default=0.18]: Tag dimensions, translation of tags will be calculated in same unit of measure
- tag_family [string] [default=”tag36h11”]: Tag family, currently ONLY tag36h11 is supported
isaac.perception.BirdViewProjection
Description
Unprojects a given 2-channel image from perspective to bird’s eye view. The codelet takes the following inputs:
- ImageProto: 2-channel image which is to be unprojected to bird’s eye view
- LatticeProto: Represents the gridmap information corresponding to the unprojected 2-channel image
- ColorCameraProto: Required to obtain the pinhole model which corresponds to the input 2-channel image.
The codelet outputs an unprojected image along with a lattice proto message with the same parameter values as the input lattice proto, but with the timestamp of the input image. These messages are intended to be used by the local map as a representation of occupancy.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- color_image [ColorCameraProto]: Input proto message containing the color image. This is required to obtain the pinhole model
- input_image [ImageProto]: Incoming 2-channel float image
- gridmap_lattice [LatticeProto]: Input lattice proto. This contains relevant information about the gridmap corresponding to the input 2-channel image.
Outgoing messages
- bird_view_image [ImageProto]: Output bird view image
- synced_gridmap_lattice [LatticeProto]: Output lattice proto, published with the same timestamp as the bird view image. This contains the same parameter values as the input, but is mainly published so that the timestamps of the 2-channel image and the corresponding lattice match.
Parameters
- camera_frame [string] [default=”camera”]: The name of the camera frame
isaac.perception.CropAndDownsample
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_image [ColorCameraProto]: Input image
Outgoing messages
- output_image [ColorCameraProto]: Cropped and resized output image
Parameters
- crop_mode [CropMode] [default=CropMode::kManual]: Parameter which determines if we should auto-crop the input image. If set to "AutomaticCrop", the codelet computes the maximum possible crop which matches the aspect ratio of the desired output specified by downsample_size. If the parameters crop_start and crop_size are explicitly set by the user, they are reset by the codelet. If set to "ManualCrop", the codelet uses the exact crop start and crop size as specified by the user.
- crop_start [Vector2i] [default=]: Top left corner (row, col) for crop
- crop_size [Vector2i] [default=]: Target dimensions (rows, cols) for crop.
- downsample_size [Vector2i] [default=]: Target dimensions (rows, cols) for downsample after crop.
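The automatic-crop behaviour described for crop_mode can be sketched as follows (an illustrative Python version under the assumption that the crop is centered; `automatic_crop` is a hypothetical name):

```python
def automatic_crop(input_size, output_size):
    """Compute the largest centered crop matching the output aspect ratio.

    Sizes are (rows, cols). The returned crop is intended to be
    downsampled to output_size afterwards.
    """
    in_r, in_c = input_size
    out_r, out_c = output_size
    # Try to use the full width; shrink it if the implied height is too large.
    crop_c = in_c
    crop_r = crop_c * out_r // out_c
    if crop_r > in_r:
        crop_r = in_r
        crop_c = crop_r * out_c // out_r
    start = ((in_r - crop_r) // 2, (in_c - crop_c) // 2)
    return start, (crop_r, crop_c)
```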
isaac.perception.CropAndDownsampleCuda
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_image [ColorCameraProto]: Input image
Outgoing messages
- output_image [ColorCameraProto]: Cropped and resized output image
Parameters
- crop_start [Vector2i] [default=]: Top left corner (row, col) for crop
- crop_size [Vector2i] [default=]: Target dimensions (rows, cols) for crop.
- downsample_size [Vector2i] [default=]: Target dimensions (rows, cols) for downsample after crop.
isaac.perception.DisparityToDepth
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- disparity [DepthCameraProto]: The received disparity image
- extrinsics [Pose3dProto]: Camera pair extrinsics (right-to-left)
Outgoing messages
- depth [DepthCameraProto]: The converted depth image in meters
- Parameters
isaac.perception.FiducialAsGoal
Description
Looks for a fiducial with a specific ID and uses it as a goal for the navigation stack. The goal can be computed relative to the fiducial based on different methods.
- “center”: The center of the fiducial is projected into the Z=0 plane and published as the goal point for the navigation stack.
- “pointing”: A ray is shot out of the center of the fiducial into the direction of the normal and intersected with the Z=0 ground plane. This happens up to a maximum distance of max_goal_tag_distance.
- “offset”: The fixed offset fiducial_T_goal is used to compute the goal based on the detected fiducial.
A goal or plan is published every time a fiducial detection is received. In case the fiducial is not found for longer than give_up_duration, a stop command is sent.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- fiducials [FiducialListProto]: The input channel where fiducial detections are published
Outgoing messages
- goal [Goal2Proto]: The target fiducial as a goal
- plan [Plan2Proto]: The target fiducial as a simple plan with one waypoint
Parameters
- target_fiducial_id [string] [default=”tag36h11_9”]: The ID of the target fiducial
- give_up_duration [double] [default=1.0]: If the robot does not see the fiducial for this time period the robot is stopped
- mode [Mode] [default=Mode::kCenter]: Specifies how the robot will use the fiducial to compute its goal location.
- max_goal_tag_distance [double] [default=1.0]: The maximum distance the goal will be away from the tag
- robot_frame [string] [default=]: The name of the robot coordinate frame
- camera_frame [string] [default=]: The name of the camera coordinate frame
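The "pointing" mode described above shoots a ray from the fiducial center along its normal and intersects it with the Z=0 ground plane, up to max_goal_tag_distance. A hedged Python sketch of that geometry (`pointing_goal` is a hypothetical name):

```python
def pointing_goal(tag_center, tag_normal, max_goal_tag_distance=1.0):
    """Intersect the tag's normal ray with the Z=0 ground plane.

    tag_center and tag_normal are 3D vectors in a frame whose Z=0 plane
    is the ground. Returns the (x, y) goal, clamped to at most
    max_goal_tag_distance along the ray (also when the ray never hits).
    """
    cx, cy, cz = tag_center
    nx, ny, nz = tag_normal
    if nz != 0.0:
        t = -cz / nz           # ray parameter where it crosses Z=0
    else:
        t = float('inf')       # ray parallel to the ground plane
    t = min(max(t, 0.0), max_goal_tag_distance)
    return (cx + t * nx, cy + t * ny)
```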
isaac.perception.ImageWarp
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_image [ColorCameraProto]: The input image and its optical parameters. The parameters include focal length, principal point, radial and tangential distortion parameters, and projection type (perspective or fisheye).
Outgoing messages
- output_image [ColorCameraProto]: The output image and its optical parameters. The output parameters are set according to the requested target camera model. For perspective they match the source model with no distortion. For an equirectangular model they are computed based on the choice of the pixel_density parameter.
Parameters
- down_scale_factor [int] [default=4]: Scaling of the displayed images in Sight. down_scale_factor is the ratio of the size of the source image to the displayed image.
- gpu_id [int] [default=0]: The GPU device to be used for Warp360 CUDA operations. The default value of 0 suffices for cases where there is only one GPU, and is a good default when there is more than one GPU.
- output_model [ImageWarpOutputModel] [default=ImageWarpOutputModel::kPerspective]: The desired output camera model
- pixel_density [double] [default=]: For certain projections this parameter can be used to control the size of the output image. For an equirectangular projection the output size will be the field of view angles of the camera multiplied by this constant. If this parameter is not set the output size will be identical to the input size which can lead to undesired cropping or black bars.
- background_color [Vector3ub] [default=]: Some projections will not fully cover the output image space. In that case the background is black by default, but can be changed to the given color if desired.
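The pixel_density rule for equirectangular output can be written out as a small sketch (illustrative only; `equirect_output_size` is a hypothetical name, with pixel_density interpreted as pixels per radian of field of view):

```python
import math

def equirect_output_size(h_fov, v_fov, pixel_density):
    """Output image size (rows, cols) for an equirectangular projection.

    Per the pixel_density description, each output dimension is the
    corresponding field-of-view angle (in radians) multiplied by the
    density constant.
    """
    return (int(round(v_fov * pixel_density)),
            int(round(h_fov * pixel_density)))
```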
isaac.perception.PointCloudAccumulator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- point_cloud [PointCloudProto]: Incoming proto messages used to subscribe to small, point cloud data samples to accumulate.
Outgoing messages
- accumulated_point_cloud [PointCloudProto]: Outgoing proto messages used to publish the accumulated point cloud.
Parameters
- point_count [int] [default=10000]: Number of accumulated points before publishing the point cloud. This parameter can be configured and changed at runtime. The point cloud is published when this number of accumulated points is reached.
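The accumulate-then-publish behaviour can be sketched in a few lines (a minimal illustration; this `PointCloudAccumulator` class is a hypothetical Python stand-in, not the C++ codelet):

```python
class PointCloudAccumulator:
    """Accumulate incoming point batches; emit once point_count is reached."""

    def __init__(self, point_count=10000):
        self.point_count = point_count
        self._points = []

    def add(self, points):
        """Add a batch; return the accumulated cloud when full, else None."""
        self._points.extend(points)
        if len(self._points) >= self.point_count:
            cloud, self._points = self._points, []
            return cloud
        return None
```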
isaac.perception.RangeScanFlattening
Description
Flattens a 3D range scan into a 2D range scan.
We assume that a range scan is made up of vertical “slices” of beams which are rotated around the lidar at specific azimuth angles. For each azimuth angle all beams of the vertical slice are analysed and compared to a 2.5D world model to compute a single distance value for that azimuth angle. The pair of azimuth angle and distance is published as a “flat” range scan.
The 2.5D world model assumes that every location in the X/Y plane is either blocked or free. To compute that we assume a critical height slice relative to the lidar defined by a minimum and maximum height. If any return beam of the vertical slice hits an obstacle in that height slice the flat scan will report a hit. In addition to the height interval we also allow for a fudge on the pitch angle of the lidar which acts as an additional rejection criterion. Essentially every beam return has to be inside the height slice not only for the beam angle alpha, but for all angles in the interval [alpha - fudge, alpha + fudge].
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- scan [RangeScanProto]: Incoming 3D range scan
Outgoing messages
- flatscan [FlatscanProto]: Outgoing “flat” range scan
Parameters
- use_target_pitch [bool] [default=false]: Enables usage of target pitch parameter
- target_pitch [double] [default=]: If this value is set only beams with this pitch angle will be used; otherwise all beams of a vertical beam slice will be used.
- height_min [double] [default=0.0]: Minimum relative height for accepting a return as a collision.
- height_max [double] [default=1.5]: Maximum relative height for accepting a return as a collision.
- pitch_fudge [double] [default=0.005]: Inaccuracy of vertical beam angle which can be used to compensate small inaccuracies of the lidar inclination angle.
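The acceptance test described above can be sketched as follows. This is a hypothetical illustration (the function name and structure are not from the SDK): a beam return is accepted as an obstacle hit only if its height relative to the lidar stays inside [height_min, height_max] for every pitch angle in [alpha - fudge, alpha + fudge]. Checking the interval endpoints suffices because the height varies monotonically with pitch over typical lidar pitch ranges.

```python
import math

def beam_hits(range_m, pitch, height_min, height_max, pitch_fudge):
    """Return True if a beam return of the given range lies inside the
    critical height slice for all pitch angles in [pitch - fudge, pitch + fudge]."""
    for p in (pitch - pitch_fudge, pitch + pitch_fudge):
        z = range_m * math.sin(p)  # height of the return relative to the lidar
        if not (height_min <= z <= height_max):
            return False
    return True
```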
isaac.perception.RangeToPointCloud
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- scan [RangeScanProto]: The range scan which is to be converted to a point cloud
Outgoing messages
- cloud [PointCloudProto]: The point cloud computed from the range scan
Parameters
- min_fov [double] [default=DegToRad(360.0)]: Minimum field of view to accumulate before sending out the message (in addition to min_count)
- min_count [int] [default=360]: Minimum number of points before sending a point cloud (in addition to min_fov)
- enable_visualization [bool] [default=false]: If set to true the point cloud is visualized with Sight
- sensor_frame [string] [default="lidar"]: Name of the sensor coordinate frame used for the published point cloud
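The core of converting a planar range scan to a point cloud is a polar-to-Cartesian conversion per ray. A minimal sketch (illustrative only, not the SDK implementation):

```python
import math

def flatscan_to_points(angles, ranges):
    """Convert (angle, range) pairs of a planar scan into 2D Cartesian points
    in the sensor frame (x forward, y left)."""
    return [(r * math.cos(a), r * math.sin(a)) for a, r in zip(angles, ranges)]
```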
isaac.perception.ScanAccumulator
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- scan [RangeScanProto]: Proto used to subscribe to partial scan lidar data
Outgoing messages
- fullscan [RangeScanProto]: Proto used to publish full scan lidar data
Parameters
- min_fov [double] [default=DegToRad(360.0)]: Minimum FOV before sending out the message (in addition to min_slice_count)
- min_slice_count [int] [default=1800]: Number of slices to accumulate before sending out the message (in addition to min_fov)
- clock_wise_rotation [bool] [default=true]: Turning direction of the LIDAR
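Since min_fov and min_slice_count apply "in addition to" each other, a full scan is presumably published only once both thresholds are met. A hypothetical sketch of that gating logic (class and method names are invented for illustration):

```python
class ScanAccumulatorSketch:
    """Accumulates angular slices and reports when a full scan is ready.

    Publishes once BOTH the accumulated field of view and the slice count
    reach their minimums, mirroring the min_fov / min_slice_count parameters."""

    def __init__(self, min_fov, min_slice_count):
        self.min_fov = min_fov
        self.min_slice_count = min_slice_count
        self.fov = 0.0
        self.slices = []

    def add_slice(self, azimuth_delta, slice_data):
        """Add one vertical slice; return True when a full scan can be published."""
        self.fov += azimuth_delta
        self.slices.append(slice_data)
        return self.fov >= self.min_fov and len(self.slices) >= self.min_slice_count
```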
isaac.perception.StereoDisparityNet
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- left [ColorCameraProto]: Left camera image
- right [ColorCameraProto]: Right camera image
Outgoing messages
- left_disparity [DepthCameraProto]: The inferred depth in meters
Parameters
- weights_file [string] [default=]: Path to the weights file
- plan_file [string] [default=]: Path to the plan file. The plan file is specific to the SM version of the GPU.
- fp16_mode [bool] [default=false]: Flag to turn on half precision for TensorRT. This is currently not supported on desktop GPUs and only works on TX2/Xavier.
isaac.perception.StereoImageSplitting
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- stereo [ColorCameraProto]: Input stereo image
Outgoing messages
- left [ColorCameraProto]: Output left image
- right [ColorCameraProto]: Output right image
Parameters
- copy_pinhole_from_source [bool] [default=true]: If true, the pinhole is copied from the source and the column count is adjusted to half the original column count.
- left_rows [int] [default=]: Number of pixels in the height dimension of left image
- left_cols [int] [default=]: Number of pixels in the width dimension of left image
- left_focal_length [Vector2d] [default=]: Focal length of the left image
- left_optical_center [Vector2d] [default=]: Optical center for the left image
- right_rows [int] [default=]: Number of pixels in the height dimension of right image
- right_cols [int] [default=]: Number of pixels in the width dimension of right image
- right_focal_length [Vector2d] [default=]: Focal length of the right image
- right_optical_center [Vector2d] [default=]: Optical center for the right image
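Splitting a side-by-side stereo frame amounts to halving each row, which is also why copy_pinhole_from_source adjusts the column count to half the original. A minimal sketch with plain nested lists (illustrative, not the SDK code path):

```python
def split_stereo_image(image):
    """Split a side-by-side stereo image (rows x 2*cols) into left and right
    images. The output column count is half the input's, mirroring the
    copy_pinhole_from_source behavior."""
    half = len(image[0]) // 2
    left = [row[:half] for row in image]
    right = [row[half:] for row in image]
    return left, right
```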
isaac.planner.DifferentialBaseControl
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- plan [DifferentialTrajectoryPlanProto]: Input: the plan to follow (contains a list of states at given timestamps)
Outgoing messages
- cmd [StateProto]: Output a navigation::DifferentialBaseControl state message.
Parameters
- cmd_delay [double] [default=0.2]: Expected delay between the command sent and the execution (in seconds)
- use_pid_controller [bool] [default=true]: Whether or not to use the PID controller
- manual_mode_channel [string] [default=""]: Channel publishing whether or not the robot is in manual mode
- pid_heading [Vector7d] [default=Vector7d((double[]){1.0, 0.1, 0.0, 0.25, -0.25, 1.0, -1.0})]: Parameters of the PID controller that controls the heading error
- pid_pos_y [Vector7d] [default=Vector7d((double[]){1.0, 0.1, 0.0, 0.25, -0.25, 2.0, -2.0})]: Parameters of the PID controller that controls the lateral error
- pid_pos_x [Vector7d] [default=Vector7d((double[]){1.0, 0.1, 0.0, 0.25, -0.25, 2.0, -2.0})]: Parameters of the PID controller that controls the forward error
- controller_epsilon_gain [double] [default=1.0]: Gains used to compute the forward gain
- controller_b_gain [double] [default=1.0]: Gains used to compute the heading gain
- use_tick_time [bool] [default=true]: This flag controls whether this task uses the tick time or the acquisition time to determine which command to output. Note: the acquisition time should be used when the DifferentialTrajectoryPlanProto comes from an unsynchronized source. cmd_delay should be used to estimate the full delay from when the odometry was computed to when the command is going to be executed on the system.
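The layout of the seven PID parameters is not documented here; one plausible reading of the defaults (1.0, 0.1, 0.0, 0.25, -0.25, 1.0, -1.0) is (kp, ki, kd, integral max, integral min, output max, output min). The sketch below is purely an assumption for illustration, not the SDK's controller:

```python
def pid_step(params, error, prev_error, integral, dt):
    """One step of a clamped PID controller.

    params is ASSUMED (hypothetically) to be laid out as
    (kp, ki, kd, i_max, i_min, out_max, out_min)."""
    kp, ki, kd, i_max, i_min, out_max, out_min = params
    # Accumulate and clamp the integral term.
    integral = min(max(integral + error * dt, i_min), i_max)
    derivative = (error - prev_error) / dt
    out = kp * error + ki * integral + kd * derivative
    # Clamp the output to the allowed command range.
    return min(max(out, out_min), out_max), integral
```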
isaac.planner.DifferentialBaseLqrPlanner
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- odometry [Odometry2Proto]: Contains the odometry information required for planning (current speed, acceleration, etc.)
- global_plan [Plan2Proto]: Contains the target plan the local planner attempts to follow
Outgoing messages
- plan [DifferentialTrajectoryPlanProto]: Contains a series of poses to form the trajectory that is optimal to follow
Parameters
- robot_model [string] [default="navigation.shared_robot_model/SphericalRobotShapeComponent"]: Name of the robot model node
- time_between_command_ms [int] [default=100]: Step size to be used in integrating the state
- num_controls [int] [default=50]: Upper limit on the number of steps in the output trajectory plan
- target_distance [double] [default=0.25]: GridMapObstaclesLqr parameters: Distance we would like to keep away from surroundings
- speed_gradient_target_distance [double] [default=1.0]: How fast the target distance increases depending on the speed
- min_distance [double] [default=0.1]: Distance we want to keep away from surroundings before incurring a high penalty
- speed_gradient_min_distance [double] [default=0.0]: How fast the minimum distance increases depending on the speed.
- gain_speed [double] [default=1.0]: DifferentialLqr parameters: Gain of a quadratic cost to penalize a speed outside the range defined below
- gain_steering [double] [default=0.0]: Gain of a quadratic cost to penalize any steering
- gain_lat_acceleration [double] [default=0.2]: Gain of a quadratic cost to penalize the lateral acceleration
- gain_linear_acceleration [double] [default=4.0]: Gain of a quadratic cost to penalize the forward acceleration
- gain_angular_acceleration [double] [default=2.0]: Gain of a quadratic cost to penalize the angular acceleration
- gain_to_target [double] [default=0.1]: Gain of a custom cost to penalize the robot according to its distance to the target
- gain_to_end_position_x [double] [default=20.0]: Gain of a quadratic cost to penalize the last position in forward/backward direction relative to the target
- gain_to_end_position_y [double] [default=50.0]: Gain of a quadratic cost to penalize the last position in lateral direction relative to the target
- gain_to_end_angle [double] [default=1.0]: Gain of a quadratic cost to penalize the robot if its orientation does not match the target
- gain_to_end_speed [double] [default=10.0]: Gain of a quadratic cost to penalize the robot if it is still moving
- gain_to_end_angular_speed [double] [default=10.0]: Gain of a quadratic cost to penalize the robot if it is still rotating
- max_angular_speed [double] [default=0.75]: Soft limit on how fast we are allowed to rotate
- max_speed [double] [default=0.75]: Soft limit on how fast we would like to move
- min_speed [double] [default=-0.0]: Soft limit on how slow we are allowed to move
- distance_to_target_sigma [double] [default=1.0]: Other parameters: Controls the strength of the gradient depending on the distance to the target. The error cost is of the form d^2/(d^2 + s^2): it behaves as a quadratic cost close to the target and as a constant value far away from the target.
- decay [double] [default=1.01]: Decay applied to each step (decay < 1 means we give higher importance to the beginning of the path, while decay > 1 emphasizes the end of the path).
- distance_to_waypoint [double] [default=1.0]: Maximum distance the end of the plan needs to be from a waypoint before trying to move to the next waypoint.
- angle_to_waypoint [double] [default=DegToRad<double>(20.0)]: Maximum angle the end of the plan needs to be from a waypoint before trying to move to the next waypoint.
- obstacle_names [std::vector<std::string>] [default={}]: List of obstacles to use for the planning. The ObstacleAtlas is queried.
- use_predicted_position [bool] [default=true]: Indicates whether the predicted position or the actual position is used while planning. If true, this produces a more stable path, but it relies on a good controller to keep the robot on track. If false, this codelet also acts as a controller.
- reset_robot_position [int] [default=0]: How frequently (in terms of ticks) we reset the robot position to the odometry: 0 disables it (never reset the robot position unless use_predicted_position is set to false); 1 always resets the robot position regardless of the value of use_predicted_position; 10 (assuming use_predicted_position = true) means every 10 ticks we reset the robot position to where the odometry predicts the robot to be.
- max_predicted_position_error [double] [default=0.5]: The distance from the predicted position we tolerate. If this value is exceeded, the actual robot position is used.
- manual_mode_channel [string] [default=""]: Channel publishing whether or not the robot is in manual mode
- print_debug [bool] [default=false]: Specifies whether to show extra information in Sight for debug purposes
- reuse_lqr_plan [bool] [default=true]: Specifies whether or not to use the previous plan as starting point for the LQR
- restart_planning_cycle [int] [default=10]: How frequently (in terms of ticks) we restart the planning from scratch: 0 disables it (never restart unless reuse_lqr_plan is set to false); 1 never reuses the plan (regardless of the value of reuse_lqr_plan); 10 (assuming reuse_lqr_plan = true) means every 10 ticks we drop the previous plan and replan from a stopped position.
- static_frame [string] [default="world"]: Name of a frame which is static. This is used to compensate for the odometry drift.
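The target cost described for distance_to_target_sigma can be written down directly; with d the distance to the target and s the sigma, the cost d^2/(d^2 + s^2) is approximately quadratic near the target and saturates toward a constant far away:

```python
def target_cost(d, sigma):
    """Cost of the form d^2 / (d^2 + sigma^2): quadratic near the target,
    saturating toward 1 far away (see distance_to_target_sigma)."""
    return d * d / (d * d + sigma * sigma)
```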
isaac.planner.DifferentialBaseModel
Description
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- Outgoing messages
Parameters
- robot_radius [double] [default=0.40]: The radius of the robot for collision detection.
- base_length [double] [default=0.63]: The distance between the two wheels
- wheel_radius [double] [default=0.2405]: The radius of the wheels
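The base_length and wheel_radius parameters fix the standard differential-drive kinematics relating wheel speeds to base motion. An illustrative sketch (function name and conventions are assumptions, not SDK API):

```python
def wheel_to_body(omega_left, omega_right, wheel_radius, base_length):
    """Standard differential-drive forward kinematics: wheel angular speeds
    in rad/s -> (linear speed, angular speed) of the base."""
    v_left = wheel_radius * omega_left
    v_right = wheel_radius * omega_right
    linear = 0.5 * (v_left + v_right)            # average of wheel rim speeds
    angular = (v_right - v_left) / base_length   # speed difference over wheelbase
    return linear, angular
```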
isaac.planner.GlobalPlanSmoother
Description
Creates a valid smooth global plan based on a given rough plan
A graph-based planning algorithm on a 3-dimensional state space (position + rotation) in a large domain often produces rough non-optimal plans with unnecessary corners and detours. This component takes such a rough path and smoothes it into a more optimal but still valid plan. The smooth plan can then for example be used as the input of a trajectory planner.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- rough_plan [Plan2Proto]: A global plan which is potentially not smooth
Outgoing messages
- smooth_plan [Plan2Proto]: A valid smooth global plan computed based on the input global plan
Parameters
- robot_model [string] [default=”navigation.shared_robot_model/SphericalRobotShapeComponent”]: Name of the robot model node
- obstacle_names [std::vector<std::string>] [default=std::vector<std::string>({"map/isaac.navigation.DistanceMap", "map/restricted_area", "global_plan_local_map"})]: List of obstacles to use for the planning. The ObstacleAtlas is queried.
- backward_shortcut [bool] [default=false]: Whether we allow to shortcut moving backward
- distance_between_waypoints [double] [default=0.25]: The target distance between waypoints
- number_shortcut_iterations [int] [default=1000]: How many iterations we perform each tick to attempt to shortcut
- number_obstacle_avoidance_iterations [int] [default=50]: How many iterations we perform each tick to attempt to stay away from obstacles
- optimized_length [double] [default=50.0]: How much of the path are we optimizing: only the first X meters will be optimized
- target_clearance [double] [default=0.25]: Target clearance from the obstacles. If a waypoint is closer than this distance we try to move it in the normal direction of the path to reach the target clearance.
- maintain_distance_factor [double] [default=0.9]: When shortcutting, how much closer are we allowed to get to the obstacle. A value of zero means we can shortcut as much as we want as long as the path is valid, while a value of 1 means we can shortcut as long as either the start or end of the path is the closest to the obstacles.
isaac.planner.GlobalPlanner
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- goal [Goal2Proto]: The target destination received
- previous_path [Plan2Proto]: The previous plan the robot is following. If this plan is still valid, the global planner simply outputs it again; otherwise it generates a new plan.
Outgoing messages
- plan [Plan2Proto]: The computed global plan
Parameters
- graph_initialization_steps [int] [default=20000]: How many random samples to use while pre-computing the graph.
- graph_in_tick_steps [int] [default=0]: How many random samples to use during each tick to increase the graph size.
- graph_max_steps [int] [default=5000]: How many random samples to use when no valid path exists.
- robot_model [string] [default=”shared_robot_model”]: Name of the robot model node
- static_obstacle_names [std::vector<std::string>] [default=std::vector<std::string>({“map/isaac.navigation.DistanceMap”, “map/restricted_area”})]: Name of the static obstacles. First one needs to be the one related to the global map. Note: these obstacles are assumed to be constant, if they change the planner needs to be stopped and restarted.
- dynamic_obstacle_names [std::vector<std::string>] [default={“global_plan_local_map”}]: Name of the dynamic obstacles. (Can be changed live)
- graph_file_in [string] [default=]: Path to a file containing the graph to load.
- graph_file_out [string] [default="/tmp/graph.json"]: Path to a file where the graph is saved at the end
- model_error_margin [double] [default=0.05]: How close to obstacle the robot can be (in meters).
- model_max_translation_distance [double] [default=1.0]: Maximum distance between two points to be connected (in meters). A shorter distance produces a denser graph. In general a value in the order of the average distance of any point to the closest obstacle is recommended.
- model_max_rotation_distance [double] [default=TwoPi<double>]: Maximum rotation between two points to be connected (in radians). A shorter distance produces a denser graph.
- model_backward_path_penalty [double] [default=10.0]: The penalty when moving backward
- model_invalid_path_penalty [double] [default=100.0]: The penalty when moving into a dynamic obstacle.
- max_colliding_lookup [double] [default=0.5]: How far into an obstacle the starting position and the target are tolerated to be.
- check_direct_path [bool] [default=true]: Whether the start and end positions may be connected directly, or whether the graph should always be used for planning.
- world_dimensions [geometry::RectangleD] [default=]: Dimensions of the world. Random positions will be sampled in this area. If not set, it will be automatically computed using the obstacle map.
isaac.planner.HolonomicBaseControl
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- plan [DifferentialTrajectoryPlanProto]: Plan (position/time) the controller is trying to follow. TODO: Should not take a DifferentialTrajectoryPlanProto
Outgoing messages
- cmd [StateProto]: Output a navigation::DifferentialBaseControl state message.
Parameters
- cmd_delay [double] [default=0.2]: Expected delay between the command sent and the execution (in seconds)
- use_pid_controller [bool] [default=true]: Whether or not to use the PID controller
- manual_mode_channel [string] [default=""]: Channel publishing whether or not the robot is in manual mode
- pid_heading [Vector7d] [default=Vector7d((double[]){1.0, 0.1, 0.0, 0.25, -0.25, 1.0, -1.0})]: Parameters of the PID controller that controls the heading error
- pid_pos_y [Vector7d] [default=Vector7d((double[]){1.0, 0.1, 0.0, 0.25, -0.25, 2.0, -2.0})]: Parameters of the PID controller that controls the lateral error
- pid_pos_x [Vector7d] [default=Vector7d((double[]){0.2, 0.05, 0.0, 0.1, -0.1, 2.0, -2.0})]: Parameters of the PID controller that controls the forward error
- use_tick_time [bool] [default=true]: This flag controls whether this task uses the tick time or the acquisition time to determine which command to output. Note: the acquisition time should be used when the DifferentialTrajectoryPlanProto comes from an unsynchronized source. cmd_delay should be used to estimate the full delay from when the odometry was computed to when the command is going to be executed on the system.
isaac.planner.SphericalRobotShapeComponent
Description
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- Outgoing messages
Parameters
- circles [std::vector<geometry::CircleD>] [default={}]: List of circles that compose the robot
- smooth_minimum [double] [default=20.0]: Parameter that controls how well the minimum function is approximated. The error will be in the range D - 1/smooth_minimum <= distance <= D, where D is the real distance.
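A common way to approximate the minimum over several distances with a tunable sharpness parameter is the log-sum-exp soft minimum, which under-estimates the true minimum by at most a term inversely proportional to the sharpness. The sketch below is one plausible form of the approximation that smooth_minimum tunes; the SDK's exact formula may differ:

```python
import math

def smooth_min(distances, k):
    """Soft minimum via log-sum-exp: approaches min(distances) from below
    as the sharpness k grows, with error bounded by ln(n)/k."""
    return -math.log(sum(math.exp(-k * d) for d in distances)) / k
```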
isaac.pwm.PwmController
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- set_duty_cycle [PwmChannelSetDutyCycleProto]: PwmChannelSetDutyCycleProto is used to set a duty cycle for a PWM channel. Note: setting a PWM value for a channel automatically enables that channel. duty_cycle is a percentage, from 0.00 to 1.00.
- set_pulse_length [PwmChannelSetPulseLengthProto]: PwmChannelSetPulseLengthProto is used to set a pulse length for a PWM channel. pulse_length is a percentage, from 0.00 to 1.00 of the cycle.
- Outgoing messages
Parameters
- i2c_device_num [int] [default=0]: I2C device ID; matches /dev/i2c-X
- pwm_frequency_in_hertz [int] [default=50]: Defines the frequency at which the PWM outputs modulate, in hertz. 50 Hz is common for servos.
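Since duty_cycle is a fraction of the PWM cycle, the absolute pulse length follows directly from the frequency. A small worked helper (illustrative, not an SDK function):

```python
def pulse_length_seconds(duty_cycle, pwm_frequency_in_hertz):
    """Convert a duty cycle (0.00-1.00) into an absolute pulse length in
    seconds. At the 50 Hz typical for servos, a 0.075 duty cycle is a
    1.5 ms pulse (a common servo center position)."""
    return duty_cycle / pwm_frequency_in_hertz
```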
isaac.rgbd_processing.DepthEdges
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- depth [DepthCameraProto]: Depth image used for point and normal computation
Outgoing messages
- edges [ColorCameraProto]: Pixel edge likelihood stored as unit FP32
Parameters
- edge_jump_threshold [double] [default=0.06]: Threshold in meters after which a jump in distance between two pixels is considered an edge.
- min_depth [double] [default=]: Depth values smaller or equal to the given value will be marked as edge.
- max_depth [double] [default=]: Depth values larger or equal to the given value will be marked as edge.
- use_gpu [bool] [default=true]: If enabled GPU accelerated CUDA kernels are used; otherwise computations are done on CPU.
isaac.rgbd_processing.DepthImageFlattening
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- depth [DepthCameraProto]: Input depth image
Outgoing messages
- flatscan [FlatscanProto]: Output range scan
Parameters
- camera_frame [string] [default=”camera”]: The name of the camera coordinate frame
- ground_frame [string] [default=”ground”]: The name of the ground coordinate frame
- fov [double] [default=DegToRad(90.0)]: The field of view to use for the result range scan
- sector_delta [double] [default=DegToRad(0.5)]: Angular resolution of the result range scan
- min_distance [double] [default=0.2]: Minimum distance for the result range scan
- max_distance [double] [default=5.0]: Maximum distance for the result range scan
- range_delta [double] [default=0.10]: Range resolution of the result range scan
- cell_blocked_threshold [int] [default=10]: A sector in the result range scan is marked as blocked after the given number of points.
- height_min [double] [default=0.20]: Minimum height in ground coordinates in which a point is considered to be an obstacle
- height_max [double] [default=1.00]: Maximum height in ground coordinates in which a point is considered to be an obstacle
- skip_row [int] [default=0]: Number of pixels in row that are skipped while parsing the image
- skip_column [int] [default=0]: Number of pixels in column that are skipped while parsing the image
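Flattening can be pictured as binning obstacle points (already transformed to the ground frame) into angular sectors of width sector_delta and reporting the nearest obstacle per sector once enough points accumulate (cell_blocked_threshold). A hypothetical sketch of that binning step, not the SDK implementation:

```python
import math

def flatten_points(points, fov, sector_delta, max_distance, cell_blocked_threshold):
    """Bin ground-plane points (x forward, y left) into angular sectors and
    report per-sector obstacle distance. A sector reports the nearest point
    distance once at least cell_blocked_threshold points land in it;
    otherwise it reports max_distance (free)."""
    num_sectors = int(fov / sector_delta)
    hits = [[] for _ in range(num_sectors)]
    for x, y in points:
        angle = math.atan2(y, x)
        if abs(angle) >= fov / 2:
            continue  # outside the scan's field of view
        sector = int((angle + fov / 2) / sector_delta)
        hits[min(sector, num_sectors - 1)].append(math.hypot(x, y))
    return [min(h) if len(h) >= cell_blocked_threshold else max_distance
            for h in hits]
```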
isaac.rgbd_processing.DepthImageToPointCloud
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- depth [DepthCameraProto]: Input depth image
- color [ColorCameraProto]: Input color image to color points (optional)
Outgoing messages
- cloud [PointCloudProto]: The computed point cloud
Parameters
- use_color [bool] [default=false]: If this is enabled a color image will be used to produce a colored point cloud. This can only be changed at program start.
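Each depth pixel maps to a 3D point via the standard pinhole unprojection. An illustrative helper (parameter names are assumptions, not the SDK API):

```python
def unproject(u, v, depth, focal, center):
    """Pinhole unprojection: pixel (u, v) at the given depth to a 3D point
    in the camera frame (x right, y down, z forward)."""
    fx, fy = focal
    cx, cy = center
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)
```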
isaac.rgbd_processing.DepthNormals
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- points [ColorCameraProto]: Pixel points stored as a 3-channel FP32 image
- edges [ColorCameraProto]: Pixel edge likelihood stored as unit FP32
Outgoing messages
- normals [ColorCameraProto]: Pixel normals stored as a 3-channel FP32 image
Parameters
- normals_smooth_radius [int] [default=7]: Radius over which normals are smoothed
- use_gpu [bool] [default=true]: If enabled GPU accelerated CUDA kernels are used; otherwise computations are done on CPU.
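A per-pixel normal can be estimated from the point image by taking the cross product of the vectors to the right and down neighbors (smoothing over normals_smooth_radius then averages these estimates). A minimal sketch of the single-pixel step, not the SDK kernel:

```python
def normal_from_points(p_center, p_right, p_down):
    """Estimate a surface normal from a pixel's 3D point and its right/down
    neighbors via a cross product, returned as a unit vector."""
    ux, uy, uz = (p_right[i] - p_center[i] for i in range(3))
    vx, vy, vz = (p_down[i] - p_center[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(c / norm for c in n)
```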
isaac.rgbd_processing.DepthPoints
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- depth [DepthCameraProto]: Depth image used for point and normal computation
Outgoing messages
- points [ColorCameraProto]: Pixel points stored as a 3-channel FP32 image
Parameters
- use_gpu [bool] [default=true]: If enabled GPU accelerated CUDA kernels are used; otherwise computations are done on CPU.
isaac.rgbd_processing.FreespaceFromDepth
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- depth [DepthCameraProto]: Input image use to compute the range scan
Outgoing messages
- flatscan [FlatscanProto]: Output the freespace as a range scan that can be used for example to produce a local map for navigation
Parameters
- last_range_cell_additional_contribution [double] [default=2.5]: In order to favor the last cell in case there is no obstacle, we arbitrarily increase the value by this factor scaled by the average occupancy.
- edge_distance_cost [double] [default=0.5]: Factor to compute the cost of an edge (multiplied by the distance) Reducing this value might increase processing time.
- max_edge_cost [double] [default=1.0]: Cap on the maximum cost of an edge (Reducing this value might speed up the processing time.)
- max_contribution_after_wall [double] [default=2.5]: Once we hit a wall, we cap the value of a cell at: max_contribution_after_wall * average_weight
- wall_threshold [double] [default=5.0]: The minimum value needed for a cell to be considered as a wall (as a factor of the average value).
- fov [double] [default=DegToRad(90.0)]: The field of view to use for the result range scan
- num_sectors [int] [default=180]: Angular resolution of the result range scan
- range_delta [double] [default=0.1]: Range resolution of the result range scan
- height_min [double] [default=-1.00]: Minimum height in ground coordinates in which a point is considered valid
- height_max [double] [default=2.00]: Maximum height in ground coordinates in which a point is considered valid
- max_distance [double] [default=20.0]: Max range for the extraction.
- reduce_scale [int] [default=2]: Reduction factor for image. Values greater than one shrink the image by that amount
- integrate_temporal_information [bool] [default=false]: Whether to integrate information from previous frames over time
- use_predicted_height [bool] [default=false]: Whether to use the predicted height (from measurement) or 0 when rendering the freespace
- camera_name [string] [default=]: Name of the camera used to get the camera position in the world
isaac.rl.TemporalBatching
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- step [TensorProto]: Receive 2 dimensional tuples, one row for each agent transition
Outgoing messages
- temporal_tensor [TensorProto]: Send the formed 3 dimensional tensor of size (lookback, num_agents, tuple size)
- temporal_tensor_list [TensorProto]: Send “num_agents” number of TensorLists to the SampleAccumulator. Each such TensorList is of dimension (lookback, tuple_size), derived from our 3-D tensor
Parameters
- num_agents [int] [default=1]: Number of agents in the simulation
- look_back [int] [default=1]: Number of past steps to store in the history
- dead_flag_location [int] [default=1]: The position of the dead flag in the transition tuple. Whenever an agent is dead, we need to clear its history and start afresh
isaac.ros_bridge.CameraImageToRos
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- frame_id [string] [default=”camera”]: This param will populate frame_id in ROS image message. Details at http://docs.ros.org/api/sensor_msgs/html/msg/Image.html
isaac.ros_bridge.CameraInfoToRos
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- frame_id [string] [default=”camera”]: This param will populate frame_id in ROS CameraInfo message. Details at http://docs.ros.org/api/sensor_msgs/html/msg/CameraInfo.html
isaac.ros_bridge.FlatscanToRos
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- frame_id [string] [default=”base_scan”]: Name of the frame to be used in outgoing message
isaac.ros_bridge.GoalToRos
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- goal_frame [string] [default=”map”]: Frame of the goal in outgoing message
- robot_frame [string] [default=”base_link”]: Frame of the robot in ROS. Used to stop the robot if needed.
isaac.ros_bridge.GoalToRosAction
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- goal [Goal2Proto]: The target destination
- odometry [Odometry2Proto]: The odometry information with current speed
Outgoing messages
- feedback [Goal2FeedbackProto]: Feedback regarding the goal
Parameters
- action_name [string] [default=”move_base”]: ROS namespace where action will be communicated to
- goal_frame_ros [string] [default=”map”]: Frame of the goal in outgoing ROS message
- robot_frame_ros [string] [default=”base_link”]: Frame of the robot in ROS. Used to stop the robot if needed.
- robot_frame_isaac [string] [default=”robot”]: Frame of the robot in Isaac. Used in publishing feedback pose.
- stationary_speed_thresholds [Vector2d] [default=Vector2d(0.025, DegToRad(5.0))]: Threshold on speed to determine if the robot is stationary (positional and angular)
isaac.ros_bridge.OdometryToRos
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- pose_frame [string] [default=”odom”]: Frame of the pose in outgoing message
- twist_frame [string] [default=”base_footprint”]: Frame of the twist in outgoing message
isaac.ros_bridge.PosesToRos
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- ros_node [string] [default="ros_node"]: Name of the Isaac node with the RosNode component. Needs to be set before the application starts.
- pose_mappings [std::vector<IsaacRosPoseMapping>] [default={}]: A json object from configuration containing the poses to read from the Isaac Pose Tree and write to ROS. The left hand side (lhs_frame) corresponds to target_frame in tf2 notation; the right hand side (rhs_frame) corresponds to source_frame in tf2 notation. Layout:

      [
        {
          "isaac_pose": {
            "lhs_frame": "odom",
            "rhs_frame": "robot"
          },
          "ros_pose": {
            "lhs_frame": "odom",
            "rhs_frame": "base_footprint"
          }
        }
      ]
isaac.ros_bridge.RosNode
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- ros_node_name [string] [default=”isaac_bridge”]: Node name that will appear in ROS node diagram
isaac.ros_bridge.RosToDifferentialBaseCommand
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
- Parameters
isaac.ros_bridge.RosToPoses
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- ros_node [string] [default="ros_node"]: Name of the Isaac node with the RosNode component. Needs to be set before the application starts.
- pose_mappings [std::vector<IsaacRosPoseMapping>] [default={}]: A json object from configuration containing the poses to read from ROS and write to the Isaac Pose Tree. The left hand side (lhs_frame) corresponds to target_frame in tf2 notation; the right hand side (rhs_frame) corresponds to source_frame in tf2 notation. Layout:

      [
        {
          "isaac_pose": {
            "lhs_frame": "odom",
            "rhs_frame": "robot"
          },
          "ros_pose": {
            "lhs_frame": "odom",
            "rhs_frame": "base_footprint"
          }
        }
      ]
isaac.sight.AliceSight
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
- Parameters
isaac.sight.SightWidget
Description
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- Outgoing messages
Parameters
- title [string] [default=]: The caption of the widget. If not specified the component name will be used
- type [Type] [default=]: The type of the widget (mandatory). Possible choices are: “2d”, “3d”, “plot”.
- dimensions [Vector2i] [default=]: The initial dimensions of the widget. If not specified sight will decide.
- channels [std::vector<Channel>] [default={}]: A list of channels to display on the sight widget. Channels have several parameters: * name: The name of the sight channel in the form: node_name/component_name/channel_name * active: If disabled the channel will not be drawn initially when the widget is created
- prepend_channel_name_with_app_name [bool] [default=true]: If enabled all channel names are prefixed with the app name.
- prepend_title_with_app_name [bool] [default=true]: If enabled the title of the widget will be prefixed with the app name.
isaac.sight.WebsightServer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- port [int] [default=3000]: Port for the communication between web server and Sight
- webroot [string] [default=”packages/sight/webroot”]: Path to the files needed for Sight
- assetroot [string] [default=”../isaac_assets”]: Path to the assets like pictures
- bandwidth [int] [default=10000000]: Bandwidth to limit the rate of data transfer
- use_compression [bool] [default=false]: Whether to compress data for transfer
- ui_config [json] [default=(nlohmann::json{{“windows”, {}}})]: Configuration for User Interface (UI)
isaac.skeleton_pose_estimation.OpenPoseDecoder
Description
OpenPoseDecoder converts tensors from an OpenPose-type model into a list of skeleton models. Note: because a modified OpenPose architecture is used, the tensors are not compatible with the original paper.
OpenPose is a popular model architecture that allows 2D pose estimation of keypoints (or “parts”) of articulate and solid objects. Examples of such objects include humans, vehicles, animals, and robotic arms. Only a single type of object is normally supported by the model; however, multiple instances of the object are supported. Note: OpenPose performs simultaneous detection and ‘skeleton model’ pose estimation of objects. In the following documentation, ‘objects’, ‘skeleton models’, and ‘skeletons’ may be used interchangeably. For more information about the model, please refer to https://arxiv.org/pdf/1812.08008.pdf
OpenPoseDecoder takes in multiple tensors from the OpenPose neural network. Specifically, these tensors are used: Part Affinity Fields, Parts Gaussian Heatmaps, and Parts Gaussian Heatmaps MaxPool. It uses the Parts Gaussian Heatmaps and Parts Gaussian Heatmaps MaxPool tensors to compute the PeakMap for detecting the potential keypoints of each object in the frame, and outputs these keypoints as the vertices of a graph. The graph edges are created based on prior knowledge of the edges between object parts. It then uses the Part Affinity Fields tensor to make the graph weighted; the weighted graph contains all possible edges between candidates of two parts. A greedy algorithm specialized to the task is then used to find the optimum edges based on the maximum score that can be obtained from the weights of the graph. Finally, it refines the positions of the final keypoints and publishes the final graphs as a Skeleton2ListProto message.
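The PeakMap computation described above can be sketched as follows. This is a minimal NumPy illustration, not the actual Isaac implementation; the function names and the 3x3 pooling window are assumptions:

```python
import numpy as np

def maxpool3(heatmap):
    # 3x3 max filter with edge padding, standing in for the MaxPool tensor.
    p = np.pad(heatmap, 1, mode="edge")
    h, w = heatmap.shape
    return np.max(np.stack([p[i:i + h, j:j + w]
                            for i in range(3) for j in range(3)]), axis=0)

def extract_peaks(heatmap, maxpool_heatmap, threshold):
    # A cell is a part candidate if it is a local maximum (its value equals
    # the max-pooled value) and exceeds the detection threshold.
    mask = (heatmap == maxpool_heatmap) & (heatmap > threshold)
    rows, cols = np.nonzero(mask)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy 5x5 heatmap: a strong peak at (2, 2) and a weaker bump at (1, 1)
# that is suppressed because it is not a local maximum.
hm = np.zeros((5, 5))
hm[2, 2] = 0.9
hm[1, 1] = 0.3
peaks = extract_peaks(hm, maxpool3(hm), threshold=0.2)  # [(2, 2)]
```

Comparing the heatmap against its max-pooled version is a standard trick to perform non-maximum suppression without an explicit neighborhood loop.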
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- part_affinity_fields [TensorProto]: [0] : part_affinity_fields : PAFLayer = “lambda_2/conv2d_transpose”
- gaussian_heatmap [TensorProto]: [1] : gaussian_heatmap : GaussianHeatMapLayer = “lambda_3/tensBlur_depthwise_conv2d”
- maxpool_heatmap [TensorProto]: [2] : maxpool_heatmap : MaxPoolGHMLayer = “tensBlur/MaxPool”
Outgoing messages
- skeletons [Skeleton2ListProto]: A list of 2D pose estimations of skeleton models for detected objects (list of SkeletonProto). See SkeletonProto for more details.
Parameters
- label [string] [default=]: A string to initialize the ‘label’ field of the output SkeletonProto object. It should be set to match the type of object detected by the model (for example ‘human’).
- labels [std::vector<std::string>] [default=]: List of strings to use as detected joints labels. For example: [“Elbow”, “Wrist”, …] It is used to initialize the ‘label’ field of skeleton joints. Note, the order and size of this list of labels should match that of the gaussian_heatmap tensor (channels dimension).
- edges [std::vector<Vector2i>] [default=]: List of edges to detect (as edges of the skeleton model). Each edge is defined by a pair of indices into the labels array specified by the ‘labels’ parameter. Indices are zero-based. For example [[0, 1], [2, 3]] will define two edges with the first edge “Elbow” - “Wrist”. This list is configured at the training time of the model.
- edges_paf [std::vector<Vector2i>] [default=]: List of indices to channels of the part_affinity_fields tensor, used to locate components of the parts affinity field. This list is ‘indexed by edge_id’ (so the order and size of this list should match that of the edges parameter). This list is configured at the model training time.
- threshold_heatmap [float] [default=]: Peak-map preprocessing threshold. Part-candidates below this threshold are discarded.
- threshold_edge_size [float] [default=]: PAF-candidate edge size. Connection-candidates below this threshold are discarded.
- threshold_edge_score [float] [default=]: PAF-candidate dot-product threshold. Connection-candidates below this threshold are discarded.
- threshold_edge_sampling_counter [int] [default=]: PAF-candidate counter threshold. Connection-candidates below this threshold are discarded. Number of times dot-product was larger than threshold_edge_score during edge_sampling_steps Note, it depends on edge_sampling_steps (should be smaller or equal to edge_sampling_steps).
- threshold_part_counter [int] [default=]: Final skeleton detection part counter threshold. Detections with fewer parts are discarded.
- threshold_object_score [float] [default=]: Final skeleton detection score threshold. Detections with lower threshold are discarded.
- threshold_split_score [float] [default=]: Final skeleton detection split threshold, objects with lower threshold are not merged.
- edge_sampling_steps [int] [default=]: Number of sampling steps to calculate line integral over the part affinity field. Note also: threshold_edge_sampling_counter.
- refine_parts_coordinates [bool] [default=]: Refine peaks of the gaussian heatmap with the “weighted coordinates” approach. The gaussian heatmap grid cells adjacent to the initial peak are used to refine the peak position to get better estimates of parts coordinates. Note, the output of “refined parts coordinates” are floating-point subpixel coordinates placed at “grid centers”, rather than integer rows and columns.
- output_scale [Vector2d] [default=]: Output scale for the decoded skeleton pose output. For example, this could be the image resolution (before downscaling to fit the network input tensor resolution). The format is [output_scale_rows, output_scale_cols]
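The “weighted coordinates” refinement enabled by refine_parts_coordinates can be sketched like this. It is an illustrative Python sketch under stated assumptions (a 3x3 neighborhood and grid-center coordinates at +0.5), not the actual Isaac code:

```python
import numpy as np

def refine_peak(heatmap, r, c):
    # Intensity-weighted average of the 3x3 neighborhood around the integer
    # peak (r, c); returns subpixel (row, col) placed at grid centers (+0.5).
    rows, cols = heatmap.shape
    wr = wc = w = 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                v = float(heatmap[rr, cc])
                wr += v * (rr + 0.5)
                wc += v * (cc + 0.5)
                w += v
    return (wr / w, wc / w)

# A symmetric peak refines to the center of its grid cell.
hm = np.zeros((5, 5))
hm[2, 2] = 1.0
hm[1, 2] = hm[3, 2] = hm[2, 1] = hm[2, 3] = 0.5
refined = refine_peak(hm, 2, 2)  # (2.5, 2.5)
```

An asymmetric neighborhood (e.g. a larger value on one side of the peak) pulls the refined coordinate toward that side, which is what recovers sub-cell accuracy.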
isaac.stereo_depth.CoarseToFineStereoDepth
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- left_image [ColorCameraProto]: RGB input left image. Images should be rectified and undistorted prior to being passed in here.
- right_image [ColorCameraProto]: RGB input right image
Outgoing messages
- left_depth_image [DepthCameraProto]: The inferred depth in meters (from the view of the left camera).
Parameters
- baseline [double] [default=0.12]: default baseline for the stereo camera (in meters) if no extrinsics provided
- min_depth [double] [default=0.0]: minimum depth of the scene (in meters)
- max_depth [double] [default=20.0]: maximum depth of the scene (in meters)
isaac.superpixels.RgbdSuperpixelCostMap
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- superpixels [SuperpixelsProto]: Superpixels used to segment the image
- labels [SuperpixelLabelsProto]: Superpixels labels used to label pixels
Outgoing messages
- occupancy_map_lattice [LatticeProto]: Cost map computed from obstacle superpixel
- occupancy_map [ImageProto]: Cost map computed from obstacle superpixel
Parameters
- costmap_frame [string] [default=”costmap”]: The name of the costmap frame
- superpixels_frame [string] [default=”superpixels”]: The name of the superpixels frame
- clear_radius [int] [default=10]: A small rectangular area around the robot with this radius is always marked as free to prevent the robot from seeing itself.
- cell_size [double] [default=0.035]: The size of a cell in the costmap
- dimensions [Vector2i] [default=Vector2i(64, 64)]: The dimensions of the costmap
- relative_offset [Vector2d] [default=Vector2d(0.125, 0.5)]: The zero position of the costmap frame inside the costmap array
isaac.superpixels.RgbdSuperpixels
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- color [ColorCameraProto]: Color image used for superpixel computation
- depth [DepthCameraProto]: Depth image used for superpixel computation
- points [ColorCameraProto]: Pixel points
- edges [ColorCameraProto]: Pixel edges
- normals [ColorCameraProto]: Pixel normals
Outgoing messages
- superpixels [SuperpixelsProto]: The computed superpixels
Parameters
- seed_radius [int] [default=1]: Pixel radius over which initial superpixel features are averaged
- delta [int] [default=32]: The size of the region of influence of a superpixel
- px_expected_point_distance [double] [default=0.04]: Various parameters for superpixel computation
- px_expected_normal_distance [double] [default=0.05]:
- px_expected_color_distance [double] [default=0.25]:
- px_weight_point [double] [default=0.0]:
- px_weight_normal [double] [default=0.0]:
- px_weight_color [double] [default=3.0]:
- sp_expected_point_distance [double] [default=0.17]:
- sp_expected_normal_distance [double] [default=0.15]:
- sp_expected_color_distance [double] [default=0.27]:
- sp_weight_point [double] [default=1.0]:
- sp_weight_normal [double] [default=1.0]:
- sp_weight_color [double] [default=3.0]:
- regularization [double] [default=0.25]:
- smoothing [double] [default=1.0]:
- use_gpu [bool] [default=true]: If enabled GPU accelerated CUDA kernels are used; otherwise computations are done on CPU.
- show_boundaries [bool] [default=true]: If enabled superpixel color visualization will show boundaries. This is slightly slower.
isaac.superpixels.SuperpixelImageLabeling
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- superpixels [SuperpixelsProto]: Superpixels used to segment the image
- labels [SuperpixelLabelsProto]: Superpixels labels used to label pixels
Outgoing messages
- segmentation [SegmentationCameraProto]: Computed segmentation which labels every pixel of the original camera image
Parameters
- label_invalid [int] [default=2]: The output label for pixels marked as invalid, for example pixels with invalid depth or pixels which are not assigned to a superpixel.
isaac.utils.DetectionUnprojection
Description
Takes detections with bounding boxes in pixel coordinates and projects them into robot coordinates to output poses relative to the robot frame.
For a point of interest in the camera image, we can get a 3D translation relative to the camera frame using (1) camera intrinsics, (2) depth information, and (3) the location on the image. The question is which location to use. For each detection, we have a bounding box. A naive approach would be to pick only the center location. For robustness, we generalize this idea below.
1. For each detection, we focus around the center of the bounding box, because not every pixel of the bounding box belongs to the object of interest.
2. We get the region of interest (ROI) by shrinking the bounding box using roi_scale.
3. Around each of the 4 corners of the ROI, we create a small bounding box called an unprojection_area.
4. We take the average of the points (represented in the camera frame) for every pixel of the 4 unprojection_areas to get our final estimate.
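The steps above can be sketched as follows. This is an illustrative NumPy sketch with a pinhole camera model; the function names, the (r1, c1, r2, c2) box convention, and the intrinsics tuple are assumptions, not Isaac's actual code:

```python
import numpy as np

def roi_corners(bbox, roi_scale):
    # Step 2: shrink the bounding box around its center by roi_scale.
    r1, c1, r2, c2 = bbox
    cr, cc = (r1 + r2) / 2.0, (c1 + c2) / 2.0
    hr, hc = (r2 - r1) / 2.0 * roi_scale, (c2 - c1) / 2.0 * roi_scale
    # Step 3: the four corners of the ROI.
    return [(cr - hr, cc - hc), (cr - hr, cc + hc),
            (cr + hr, cc - hc), (cr + hr, cc + hc)]

def estimate_translation(depth, bbox, intrinsics, roi_scale=0.25,
                         spread=(10, 10), invalid_depth_threshold=0.05):
    # Step 4: average pinhole-unprojected points over the small
    # unprojection_area around each ROI corner, skipping invalid depths.
    fx, fy, cx, cy = intrinsics
    points = []
    for r0, c0 in roi_corners(bbox, roi_scale):
        for r in range(int(r0) - spread[0], int(r0) + spread[0] + 1):
            for c in range(int(c0) - spread[1], int(c0) + spread[1] + 1):
                if 0 <= r < depth.shape[0] and 0 <= c < depth.shape[1]:
                    d = float(depth[r, c])
                    if d > invalid_depth_threshold:
                        points.append(((c - cx) * d / fx,
                                       (r - cy) * d / fy, d))
    return np.mean(points, axis=0) if points else None

# A flat wall 2 m away, detection centered on the optical axis: the
# estimated translation is (0, 0, 2).
depth = np.full((100, 100), 2.0)
t = estimate_translation(depth, (30, 30, 70, 70),
                         intrinsics=(100.0, 100.0, 50.0, 50.0))
```

Averaging over four small areas instead of a single center pixel makes the estimate robust to depth holes and to the object not filling its bounding box.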
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- depth_image [DepthCameraProto]: Input depth image to use to find real-world coordinates of bounding boxes
- detections [Detections2Proto]: Bounding box in pixel coordinates and class label of objects in an image
Outgoing messages
- detections_with_poses [Detections3Proto]: Output list of detections with their 3D poses populated by this codelet
Parameters
- roi_scale [double] [default=0.25]: Scale factor for getting the region of interest (ROI) from detection bounding box. Please see codelet summary above for details.
- spread [Vector2i] [default=Vector2i(10, 10)]: In pixels, half dimensions of the unprojection_areas in row and column. Please see codelet summary above for details.
- invalid_depth_threshold [double] [default=0.05]: Depth values smaller than this value are considered to be invalid.
isaac.utils.DetectionsToPoseTree
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detections [Detections3Proto]: List of object detections made, potentially by Yolo using camera
- Outgoing messages
Parameters
- detection_frame [string] [default=]: Frame where detections are made
- label [string] [default=]: If set, we only write detection with this label to the pose tree.
isaac.utils.DifferentialTrajectoryToPlanConverter
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- original_trajectory [DifferentialTrajectoryPlanProto]: The original trajectory in some coordinate frame
Outgoing messages
- plan [Plan2Proto]: The computed plan in the desired coordinate frame
Parameters
- frame [string] [default=]: The desired frame in which to publish the plan
isaac.utils.FlatscanToPointCloud
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: Input flatscan
Outgoing messages
- cloud [PointCloudProto]: Output 3D point cloud
- Parameters
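The conversion amounts to turning each beam's (angle, range) pair into a 3D point; a minimal sketch follows (the planar z = 0 convention and the function name are assumptions for illustration):

```python
import math

def flatscan_to_points(angles, ranges):
    # Each beam becomes a point in the scanner frame; the scan is flat,
    # so all points lie in the z = 0 plane.
    return [(r * math.cos(a), r * math.sin(a), 0.0)
            for a, r in zip(angles, ranges)]

# Two beams: straight ahead at 1 m, and 90 degrees to the left at 2 m.
points = flatscan_to_points([0.0, math.pi / 2], [1.0, 2.0])
```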
isaac.utils.Plan2Converter
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- original_plan [Plan2Proto]: The original plan in some coordinate frame
Outgoing messages
- plan [Plan2Proto]: The computed plan in the desired coordinate frame
Parameters
- frame [string] [default=]: The desired frame in which to publish the plan
isaac.utils.Pose2GaussianDistributionEstimation
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- samples [Pose2Samples]: A list of samples of Pose2 type
Outgoing messages
- mean_and_covariance [Pose2MeanAndCovariance]: Mean and covariance of the received pose samples
Parameters
- lhs_frame [string] [default=]: If set the mean will also be written to the pose tree. The name of the target frame is composed from the parameters lhs_frame and rhs_frame as: lhs_frame_T_rhs_frame.
- rhs_frame [string] [default=]: See comment for lhs_frame.
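Estimating the mean of Pose2 samples needs care with the angle component; a minimal sketch, assuming (x, y, angle) tuples and using the circular mean for angles (illustrative only, not the Isaac implementation):

```python
import math

def pose2_mean(samples):
    # Arithmetic mean for the translation, circular mean for the angle:
    # naive averaging of angles would fail near the +/-pi wrap-around.
    n = len(samples)
    x = sum(s[0] for s in samples) / n
    y = sum(s[1] for s in samples) / n
    angle = math.atan2(sum(math.sin(s[2]) for s in samples),
                       sum(math.cos(s[2]) for s in samples))
    return (x, y, angle)

# Two samples straddling the wrap-around average to an angle near pi,
# not to the naive arithmetic mean of 0.
mean = pose2_mean([(0.0, 0.0, 3.0), (2.0, 2.0, -3.0)])
```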
isaac.utils.PoseMonitor
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
Outgoing messages
- report [Json]: The log of poses as json message.
Parameters
- reference_frame [string] [default=”world”]: Name of the reference frame.
- pose_names [std::vector<std::string>] [default={}]: List of names for the poses to report.
isaac.utils.PoseTreeFeed
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
Outgoing messages
- pose [PoseTreeEdgeProto]: proto edge, including the lhs, rhs, and the pose
- Parameters
isaac.utils.RigidBodiesToDetections
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- bodies [RigidBody3GroupProto]: Input information regarding rigid bodies in 3D
Outgoing messages
- detections [Detections3Proto]: Output list of objects with poses in 3D
Parameters
- confidence [double] [default=0.99]: Output detections will have this prediction confidence
isaac.utils.SendTextMessages
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
Outgoing messages
- text_output [ChatMessageProto]: Sends out a single text string periodically.
Parameters
- text_list [std::vector<std::string>] [default=]: List of text messages to send
- initial_delay [double] [default=0.0]: Delay (in seconds) before publishing the first text message
isaac.utils.WaitUntilDetection
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detections [Detections3Proto]: List of object detections made, potentially by Yolo using camera
- Outgoing messages
Parameters
- label [string] [default=]: If set, we wait until a detection with this label is made. Otherwise, we report success after any detection.
isaac.velodyne_lidar.VelodyneLidar
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
Outgoing messages
- scan [RangeScanProto]: A range scan slice published by the Lidar
Parameters
- ip [string] [default=”192.168.2.201”]: The IP address of the Lidar device
- port [int] [default=2368]: The port at which the Lidar device publishes data.
- type [VelodyneModelType] [default=VelodyneModelType::VLP16]: The type of the Lidar (currently only VLP16 is supported).
isaac.viewers.BinaryMapViewer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- binary_map [ImageProto]: The binary map to visualize with sight (Image1ub, 0 means free and 255 occupied)
- binary_map_lattice [LatticeProto]: Lattice information of the binary map
- Outgoing messages
Parameters
- min_interval [double] [default=0.05]: The minimum time which has to elapse before we publish data to sight again.
- smooth_boundary [bool] [default=true]: If enabled boundary and interior visualization will use smooth boundaries
isaac.viewers.ColorCameraViewer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- color_listener [ColorCameraProto]: 8-bit RGB color camera to visualize
- Outgoing messages
Parameters
- target_fps [double] [default=30.0]: Maximum framerate at which images are displayed in sight.
- reduce_scale [int] [default=1]: Reduction factor for image, values greater than one will shrink the image by that factor.
- use_png [bool] [default=false]: Renders tensor image as PNG if true, otherwise renders as JPG
- camera_name [string] [default=”“]: Frame of the camera (to get the position from the PoseTree)
isaac.viewers.DepthCameraViewer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- depth_listener [DepthCameraProto]: 32-bit float depth image to visualize
- Outgoing messages
Parameters
- target_fps [double] [default=30.0]: Maximum framerate at which images are displayed in sight
- reduce_scale [int] [default=1]: Reduction factor for image, values greater than one will shrink the image by that factor
- min_visualization_depth [double] [default=0.0]: Minimum depth in meters used in color grading the depth image for visualization
- max_visualization_depth [double] [default=32.0]: Maximum depth in meters used in color grading the depth image for visualization
- colormap [std::vector<Vector3i>] [default=]: A color gradient used for depth visualization. The min_visualization_depth gets mapped to the first color, the max gets mapped to last color. Everything else in between gets interpolated.
- camera_name [string] [default=”“]: Name of the camera used to get the camera pose from the pose tree (optional)
- enable_depth_point_cloud [bool] [default=false]: Enable depth point cloud visualization, can slow down sight if too many points are being drawn
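The color grading controlled by min_visualization_depth, max_visualization_depth, and colormap can be sketched as linear interpolation along the gradient. This is an illustrative sketch, not the actual Sight implementation:

```python
import numpy as np

def depth_to_color(depth, colormap, min_depth, max_depth):
    # Map min_depth to the first color, max_depth to the last color, and
    # linearly interpolate between adjacent gradient stops in between.
    t = float(np.clip((depth - min_depth) / (max_depth - min_depth), 0.0, 1.0))
    position = t * (len(colormap) - 1)
    i = min(int(position), len(colormap) - 2)
    f = position - i
    c0 = np.array(colormap[i], dtype=float)
    c1 = np.array(colormap[i + 1], dtype=float)
    return tuple(int(v) for v in np.round((1.0 - f) * c0 + f * c1))

gradient = [(0, 0, 0), (255, 255, 255)]
near = depth_to_color(0.0, gradient, 0.0, 32.0)   # first color
far = depth_to_color(32.0, gradient, 0.0, 32.0)   # last color
```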
isaac.viewers.Detections3Viewer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detections [Detections3Proto]: List of detections with their 3D poses in robot frame
- Outgoing messages
Parameters
- radius [double] [default=]: Radius used when visualizing the detections TODO: Can we get this value from DetectionUnprojection?
- mesh_name [string] [default=]: Name of the mesh in sight
- object_T_box_center [Pose3d] [default=]: Position of the center of the bounding box.
- box_dimensions [Vector3d] [default=]: Dimensions of the bounding box.
- detections_color [Vector4ub] [default=Vector4ub(118, 185, 0, 255)]: Color of the detections
- frame [string] [default=]: Reference frame of the detection. TODO(ben): this should come from the Detections3Proto.
isaac.viewers.DetectionsViewer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detections [Detections2Proto]: Bounding box in pixel coordinates and class label of objects in an image
- Outgoing messages
Parameters
- reduce_scale [int] [default=1]: Reduction factor for bounding boxes, values greater than one will shrink the box by that amount. Should match the factor of the image being drawn upon.
- border_background_color [Pixel3ub] [default=Colors::Black()]: Background border color for the bounding boxes
- border_foreground_color [Pixel3ub] [default=Colors::NvidiaGreen()]: Foreground border color for the bounding boxes
- border_background_width [double] [default=4.0]: Background border width for the bounding boxes
- border_foreground_width [double] [default=2.0]: Foreground border width for the bounding boxes
- font_size [double] [default=30.0]: Font size for the class label displayed
- textbox_height [double] [default=35.0]: Height of the textbox in which the class label is displayed
- minimum_textbox_width [double] [default=80.0]: Minimum width for the textbox. If the bounding box width is greater than this, the textbox width will be set to the bounding box width instead.
isaac.viewers.FiducialsViewer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- fiducials [FiducialListProto]: The input channel to receive fiducial detections
- Outgoing messages
Parameters
- text_size [double] [default=30.0]: The size of the text used in sight, in pixels (px)
- id_specific_configs [Json] [default=Json::object()]: An optional json object for configuring tag visualization for certain tags. Currently supported parameters are “color_fill”, “text_below”, and “text_above”. Color needs to be in a valid JavaScript format, e.g., “#C0C0C0”, “#C0C0C05C”, “rgb(255, 99, 71)”, or “white”. Here is an example layout:
{
"tag36h11_6": { "text_below": "Metal", "color_fill": "#C0C0C05C" },
"tag36h11_7": { "text_below": "Compost", "color_fill": "#FFA95F5C" },
"tag36h11_8": { "text_below": "Paper", "color_fill": "#F2EECB5C" },
"tag36h11_9": { "text_below": "Paper -->", "text_above": "<-- Metal" }
}
isaac.viewers.FlatscanViewer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: Incoming range scan used to localize the robot
- Outgoing messages
Parameters
- beam_skip [int] [default=4]: The number of beams which are skipped for visualization
- map [string] [default=”map”]: Map node to use for localization
- range_scan_model [string] [default=”shared_robot_model”]: Name of the robot model node
- flatscan_frame [string] [default=”lidar”]: Frame which flatscan is defined at
isaac.viewers.GoalViewer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- goal [Goal2Proto]: The target destination received
- Outgoing messages
Parameters
- robot_model [string] [default=”shared_robot_model”]: Name of the robot model node
- robot_frame [string] [default=”robot”]: Name of robot’s frame
isaac.viewers.MosaicViewer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- Outgoing messages
Parameters
- tile_dimensions [Vector2i] [default=Vector2i(360, 640)]: Dimensions of one tile in the mosaic. Images will be resized to fit the tile.
- tiles_per_column [int] [default=2]: Number of tiles per row in the mosaic.
- margin [int] [default=10]: Number of pixels of the margin for each panel
- colormap [std::vector<Vector3ub>] [default=]: List of colors for the margin and text of each panel
isaac.viewers.OccupancyMapViewer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- occupancy_map [ImageProto]: The occupancy map to visualize with sight
- occupancy_map_lattice [LatticeProto]: The occupancy lattice information about the grid
- Outgoing messages
Parameters
- min_interval [double] [default=0.05]: The minimum time which has to elapse before we publish data to sight again.
isaac.viewers.PointCloudViewer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- cloud [PointCloudProto]: The point cloud which will be visualized in sight.
- Outgoing messages
Parameters
- target_fps [double] [default=10.0]: Maximum framerate at which images are displayed in sight.
- skip [int] [default=11]: If set to a value greater than 1 points will be skipped. For example skip = 2 will skip half of the points. Use this value to limit the number of points visualized in sight.
- max_distance [double] [default=5.0]: Points which have a depth (z-component) greater than this value will be skipped
- frame [string] [default=]: The coordinate frame in which the point cloud is visualized.
isaac.viewers.SegmentationCameraViewer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- segmentation_listener [SegmentationCameraProto]: The segmentation_listener object receives 8-bit class, 16-bit instance, and class label (string, int pair) information from a SegmentationCameraProto message
- Outgoing messages
Parameters
- target_fps [double] [default=30.0]: Target FPS used to show images to sight, decrease to reduce overall bandwidth needed
- reduce_scale [int] [default=1]: Reduction factor for image, values greater than one will shrink the image by that amount
- camera_name [string] [default=]: Frame of the camera (to get the position from the PoseTree)
isaac.viewers.SegmentationViewer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- color [ColorCameraProto]: The original camera image
- segmentation [SegmentationCameraProto]: Pixel-wise image segmentation which is overlayed on top of the camera image
- Outgoing messages
Parameters
- max_fps [double] [default=20.0]: Maximum FPS for show images to sight which can be used to reduce overall bandwidth
- reduce_scale [int] [default=2]: Reduction factor for image, values greater than one will shrink the image by that amount
- highlight_label [int] [default=0]: The label which will be overlayed on top of the color image.
- highlight_color [Pixel3ub] [default=Pixel3ub(255, 255, 255)]: The color which is used to overlay the label.
- opacity [double] [default=0.5]: Opacity (0.0: full transparent, 1.0: full overlay) of the overlayed labels
- camera_name [string] [default=]: Frame of the camera (to get the position from the PoseTree)
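The overlay controlled by highlight_label, highlight_color, and opacity can be sketched as a per-pixel alpha blend (an illustrative NumPy sketch, not the actual viewer code):

```python
import numpy as np

def overlay_label(color_image, label_image, highlight_label,
                  highlight_color, opacity):
    # Blend highlight_color over the pixels whose class label matches
    # highlight_label; all other pixels are left untouched.
    out = color_image.astype(np.float64)
    mask = label_image == highlight_label
    out[mask] = ((1.0 - opacity) * out[mask]
                 + opacity * np.asarray(highlight_color, dtype=np.float64))
    return out.astype(np.uint8)

# Black 2x2 image; pixels labeled 1 get a half-opacity white overlay.
color = np.zeros((2, 2, 3), dtype=np.uint8)
labels = np.array([[0, 1], [1, 0]])
result = overlay_label(color, labels, 1, (255, 255, 255), 0.5)
```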
isaac.viewers.SkeletonViewer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- skeletons [Skeleton2ListProto]: A list of skeleton models.
- Outgoing messages
Parameters
- labels [std::vector<std::string>] [default=]: List of joints labels to render (as joints of the skeleton model). For example: [“Elbow”, “Wrist”, …]
- edges_render [std::vector<Vector2i>] [default=]: List of edges to render (as edges of the skeleton model). Each edge is defined by a pair of indices into the labels array specified by the ‘labels’ parameter. Indices are zero-based. For example [[0, 1], [2, 3]] will define two edges with the first edge “Elbow” - “Wrist”.
isaac.viewers.TensorViewer
Description
“Flattens” and colorizes a tensor into an image and visualizes it with sight. Depending on the element type and rank of the tensor, different visualization techniques are used.
- Element type:
- 32-bit floating points are colorized with StarryNightColorGradient using the range specified by the parameter range
- 32-bit integers are colorized using a standard set of random colors
- Rank:
- A rank 1 tensor is reformatted into a rank-2 tensor with tile_columns number of columns. If tile_columns is not specified only a single row will be used.
- A rank 2 tensor is visualized directly using its dimensions.
- A rank 3 tensor is visualized as stitched slices. Slices are extracted based on the storage order. The tile_columns parameter defines how many tiles are used horizontally for the stitched mosaic.
- Tensors with rank 4 or higher are not supported.
Note that dimensions of 1 are ignored, e.g. a 1x1x8 tensor is considered to have rank 1.
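The rank-3 stitching described above can be sketched as follows. This is an illustrative NumPy sketch assuming planar storage order, i.e. a (slices, rows, cols) tensor, with unused tiles left at zero; it is not the actual viewer code:

```python
import numpy as np

def tile_rank3(tensor, tile_columns):
    # Stitch each (rows x cols) slice into a mosaic with tile_columns tiles
    # per row; slices are taken along the first (planar) dimension.
    n, rows, cols = tensor.shape
    tile_rows = -(-n // tile_columns)  # ceiling division
    mosaic = np.zeros((tile_rows * rows, tile_columns * cols), tensor.dtype)
    for i in range(n):
        r, c = divmod(i, tile_columns)
        mosaic[r * rows:(r + 1) * rows, c * cols:(c + 1) * cols] = tensor[i]
    return mosaic

# Three 2x2 slices in a 2-column mosaic produce a 4x4 image with one
# empty (zero) tile in the bottom-right corner.
mosaic = tile_rank3(np.arange(12).reshape(3, 2, 2), tile_columns=2)
```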
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- tensor [TensorProto]: A tensor to visualize in sight. Multiple different formats are supported as explained in the class comment.
Outgoing messages
- colorized [ImageProto]: The computed image which is shown via sight is also published as a message on this channel.
Parameters
- tile_columns [int] [default=]: Number of columns in the resulting mosaic image
- rank_3_as_color [bool] [default=false]: If enabled a rank three tensor will be interpreted as a 3-channel RGB image. Otherwise one tile will be generated per channel slice.
- storage_order [TensorViewerStorage] [default=TensorViewerStorage::kPlanar]: Defines how a rank 3 tensor is sliced for visualization.
- range [Vector2d] [default=Vector2d(0.0, 1.0)]: For floating-point tensors values will be clamped to this range.
- render_size [Vector2i] [default=]: Optionally enlarge or shrink the resulting image before visualization with sight.
- use_png [bool] [default=false]: Renders tensor image as PNG if true, otherwise renders as JPG
isaac.viewers.TrajectoryListViewer
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- trajectories [Vector3TrajectoryListProto]: The input channel to receive all trajectories to be displayed.
- Outgoing messages
Parameters
- renderer_frame [string] [default=”world”]: Renderer frame to transform the trajectories per their respective frames.
isaac.ydlidar.YdLidar
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
Outgoing messages
- flatscan [FlatscanProto]: A flat scan from the LIDAR. The average message covers about 0.4 radians and contains 40 measurements; the average publish rate is 120 messages per second
Parameters
- device [string] [default=”/dev/ttyUSB0”]: Serial port where device is connected
isaac.yolo.YoloTensorRTInference
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- rgb_image [ColorCameraProto]: Input image
Outgoing messages
- bounding_box_tensor [TensorProto]: Output tensor list from Yolo TensorRT inference in the format:
- proto[0] - Bounding box parameters, which include
{bounding_box_1{x1, y1, x2, y2}, objectness, {probability_0, probability_1, ..., probability_<N>}} ... {bounding_box_<K>{x1, y1, x2, y2}, objectness, {probability_0, probability_1, ..., probability_<N>}}, where N is the number of classes the network is trained on, K is the number of bounding boxes predicted, and each bounding_box holds the minimum and maximum (x, y) coordinates
- net_config_tensor [TensorProto]: proto[1] - Network config parameters, which include {network_width, network_height, image_width, image_height, number of classes trained on, number of parameters for each bounding box (excluding class probabilities)}
Parameters
- yolo_config_json [json] [default=nlohmann::json({})]: Yolo config json
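Under the bounding-box layout above, each predicted box occupies 4 + 1 + N consecutive values. Splitting such a flat tensor can be sketched like this (the function name and the flat-list input are assumptions for illustration):

```python
def parse_yolo_boxes(flat, num_classes):
    # Each box is encoded as [x1, y1, x2, y2, objectness, p_0 ... p_{N-1}],
    # so the stride between boxes is 5 + num_classes values.
    stride = 5 + num_classes
    boxes = []
    for i in range(0, len(flat), stride):
        chunk = flat[i:i + stride]
        boxes.append({
            "bbox": chunk[0:4],            # min/max (x, y) corners
            "objectness": chunk[4],
            "class_probabilities": chunk[5:],
        })
    return boxes

# Two boxes from a network trained on 2 classes.
flat = [0, 0, 10, 10, 0.9, 0.1, 0.8,
        5, 5, 20, 20, 0.7, 0.6, 0.2]
boxes = parse_yolo_boxes(flat, num_classes=2)
```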
isaac.zed.ZedImuReader
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
Outgoing messages
- imu_raw [ImuProto]: IMU data (if available). Polling is performed on every tick, so the IMU poll rate is equal to the codelet tick frequency
- imu_T_left_camera [Pose3dProto]: IMU to left camera transformation. It contains the rotation and translation between the IMU and left camera frames
Parameters
- imu_T_camera_publication_rate [int] [default=2]: There’s no practical need to publish the imu_T_camera transformation on every tick, so the transform is published every nth tick
- imu_translation_scaling_factor [double] [default=1.0e3]: ZED SDK <= 2.8.3 has a bug - the reported IMU translation is incorrectly scaled by 1.0e-3 https://github.com/stereolabs/zed-examples/issues/192