Component API Overview¶
The following section lists all components available in Isaac SDK. For each component, the incoming and outgoing message channels and the corresponding message types are listed. Additionally, all parameters are explained with their names, types, and corresponding default values.
The following table gives an overview of all components. The columns ‘# Incoming’, ‘# Outgoing’ and ‘# Parameters’ indicate how many incoming message channels, outgoing message channels, and parameters the corresponding component has.
Namespace | Name | # Incoming | # Outgoing | # Parameters |
---|---|---|---|---|
isaac | AdafruitNeoPixelLedStrip | 1 | 0 | 1 |
isaac | ArgusCsiCamera | 0 | 2 | 5 |
isaac | BlackFrameGenerator | 1 | 1 | 2 |
isaac | ImageComparison | 2 | 0 | 2 |
isaac | Joystick | 0 | 1 | 7 |
isaac | LandmarksViewer | 2 | 0 | 2 |
isaac | LivoxLidar | 0 | 1 | 4 |
isaac | PanTiltDriver | 1 | 2 | 14 |
isaac | RealsenseCamera | 0 | 8 | 16 |
isaac | SegwayRmpDriver | 1 | 1 | 5 |
isaac | SerialBMI160 | 0 | 1 | 1 |
isaac | SimpleLed | 0 | 1 | 0 |
isaac | SlackBot | 1 | 1 | 2 |
isaac | StereoVisualOdometry | 5 | 5 | 9 |
isaac | TorchInferenceTestDisplayOutput | 1 | 0 | 0 |
isaac | TorchInferenceTestSendInput | 0 | 1 | 1 |
isaac | V4L2Camera | 0 | 2 | 16 |
isaac | Vicon | 0 | 2 | 3 |
isaac | ZedCamera | 0 | 7 | 22 |
isaac.atlas | Pose2Comparer | 0 | 0 | 5 |
isaac.atlas | PoseFromFile | 0 | 0 | 3 |
isaac.atlas | PoseMessageInjector | 1 | 0 | 1 |
isaac.atlas | PoseToFile | 0 | 0 | 3 |
isaac.atlas | PoseToMessage | 0 | 1 | 2 |
isaac.atlas | PoseTreeRelink | 0 | 0 | 3 |
isaac.atlas | PoseTreeRemoveLink | 0 | 0 | 2 |
isaac.atlas | PublishMapRoi | 2 | 1 | 2 |
isaac.behavior_tree | WaitUntilMessage | 0 | 0 | 1 |
isaac.cask | Recorder | 0 | 0 | 8 |
isaac.cask | Replay | 0 | 0 | 6 |
isaac.cask | ReplayBridge | 1 | 1 | 3 |
isaac.composite | CompositeAtlas | 0 | 0 | 1 |
isaac.composite | CompositeMetric | 0 | 0 | 4 |
isaac.composite | CompositePublisher | 0 | 1 | 5 |
isaac.composite | FollowPath | 2 | 1 | 3 |
isaac.controller | AckermannControl | 2 | 1 | 13 |
isaac.controller | DifferentialBaseControl | 1 | 1 | 11 |
isaac.controller | HolonomicBaseControl | 1 | 1 | 7 |
isaac.controller | MultiJointController | 2 | 1 | 3 |
isaac.deepstream | Pipeline | 0 | 0 | 1 |
isaac.detect_net | DetectNetDecoder | 2 | 1 | 7 |
isaac.dummy | DummyPose2dConsumer | 1 | 0 | 1 |
isaac.dummy | DummyPose2dProducer | 0 | 1 | 0 |
isaac.dummy | DummyStatus | 0 | 0 | 1 |
isaac.dynamixel | DynamixelDriver | 1 | 1 | 10 |
isaac.egm_fusion | EvidenceGridMapViewer | 2 | 0 | 1 |
isaac.egm_fusion | EvidenceMapFusion | 2 | 1 | 7 |
isaac.egm_fusion | EvidenceMapInpaint | 2 | 1 | 3 |
isaac.egm_fusion | EvidenceToBinaryMap | 1 | 1 | 3 |
isaac.egm_fusion | RangeScanToEvidenceMap | 2 | 1 | 2 |
isaac.evaluators | AprilTagsEvaluator | 2 | 0 | 3 |
isaac.evaluators | ColorEvaluator | 1 | 1 | 7 |
isaac.evaluators | ColorSanityEvaluator | 2 | 0 | 7 |
isaac.evaluators | DepthEvaluator | 2 | 0 | 9 |
isaac.evaluators | DepthSanityEvaluator | 2 | 0 | 5 |
isaac.evaluators | FrameRateEvaluator | 2 | 0 | 4 |
isaac.fiducials | AprilTags3Detection | 2 | 1 | 2 |
isaac.fiducials | AprilTagsDetection | 2 | 1 | 3 |
isaac.fiducials | FiducialAsGoal | 1 | 2 | 6 |
isaac.flatscan_localization | GradientDescentLocalization | 1 | 0 | 4 |
isaac.flatscan_localization | GridSearchLocalizer | 0 | 0 | 12 |
isaac.flatscan_localization | MonitorEvaluatorFrequency | 1 | 1 | 2 |
isaac.flatscan_localization | MonitorEvaluatorMapscan | 1 | 1 | 6 |
isaac.flatscan_localization | MonitorEvaluatorSvio | 0 | 1 | 6 |
isaac.flatscan_localization | ParticleFilterLocalization | 2 | 1 | 12 |
isaac.flatscan_localization | ParticleSwarmLocalization | 1 | 0 | 6 |
isaac.flatsim | DifferentialBasePhysics | 1 | 1 | 8 |
isaac.flatsim | DifferentialBaseSimulator | 2 | 2 | 7 |
isaac.flatsim | FlatscanNoiser | 1 | 1 | 7 |
isaac.flatsim | SimRangeScan | 0 | 1 | 7 |
isaac.flatsim | TraceOccupancyMap | 2 | 1 | 6 |
isaac.fuzzy | EfllFuzzyEngineExample | 0 | 0 | 0 |
isaac.fuzzy | LfllFuzzyEngineExample | 0 | 0 | 0 |
isaac.gtc_china | PanTiltGoto | 1 | 1 | 3 |
isaac.hgmm | HgmmPointCloudMatching | 1 | 0 | 10 |
isaac.imu | IioBmi160 | 0 | 1 | 2 |
isaac.imu | ImuCalibration2D | 1 | 0 | 3 |
isaac.imu | ImuCorrector | 1 | 1 | 3 |
isaac.imu | ImuSim | 1 | 1 | 7 |
isaac.json | JsonMockup | 0 | 2 | 4 |
isaac.json | JsonReplay | 0 | 1 | 1 |
isaac.json | JsonTcpClient | 0 | 0 | 5 |
isaac.json | JsonToProto | 1 | 1 | 0 |
isaac.json | JsonWriter | 2 | 0 | 3 |
isaac.json | ProtoToJson | 1 | 1 | 0 |
isaac.kaya | KayaBaseDriver | 2 | 2 | 4 |
isaac.kinova_jaco | KinovaJaco | 2 | 2 | 8 |
isaac.laikago | LaikagoDriver | 1 | 2 | 5 |
isaac.lidar_slam | Cartographer | 1 | 0 | 6 |
isaac.lidar_slam | GMapping | 2 | 0 | 15 |
isaac.lqr | AckermannLqrPlanner | 3 | 1 | 14 |
isaac.lqr | DifferentialBaseLqrPlanner | 2 | 1 | 36 |
isaac.lqr | MultiJointLqrPlanner | 0 | 0 | 7 |
isaac.map | AdditionFlatmapCost | 0 | 0 | 0 |
isaac.map | DirectedAreaFlatmapCost | 0 | 0 | 15 |
isaac.map | KinematicTree | 0 | 0 | 1 |
isaac.map | KinematicTreeToPoseTree | 1 | 0 | 2 |
isaac.map | Map | 0 | 0 | 2 |
isaac.map | MapBridge | 1 | 1 | 0 |
isaac.map | MinimumFlatmapCost | 0 | 0 | 0 |
isaac.map | MultiplicationFlatmapCost | 0 | 0 | 0 |
isaac.map | ObstacleAtlas | 0 | 0 | 1 |
isaac.map | OccupancyFlatmapCost | 0 | 0 | 5 |
isaac.map | OccupancyGridMapLayer | 0 | 0 | 3 |
isaac.map | PolygonFlatmapCost | 0 | 0 | 7 |
isaac.map | PolygonMapLayer | 0 | 0 | 5 |
isaac.map | PolylineFlatmapCost | 0 | 0 | 10 |
isaac.map | Spline | 0 | 1 | 1 |
isaac.map | SurfletModelAtlas | 0 | 0 | 1 |
isaac.map | SurfletModelPublisher | 0 | 1 | 5 |
isaac.map | WaypointMapLayer | 0 | 0 | 1 |
isaac.message_generators | BinaryTensorGenerator | 0 | 1 | 2 |
isaac.message_generators | CameraGenerator | 0 | 3 | 2 |
isaac.message_generators | CameraIntrinsicsGenerator | 1 | 1 | 4 |
isaac.message_generators | ConfusionMatrixGenerator | 0 | 1 | 0 |
isaac.message_generators | Detections2Generator | 0 | 1 | 1 |
isaac.message_generators | Detections3Generator | 0 | 1 | 4 |
isaac.message_generators | DifferentialBaseControlGenerator | 0 | 1 | 2 |
isaac.message_generators | DifferentialBaseStateGenerator | 0 | 1 | 4 |
isaac.message_generators | FlatscanGenerator | 0 | 1 | 6 |
isaac.message_generators | HolonomicBaseControlGenerator | 0 | 1 | 3 |
isaac.message_generators | ImageFeatureGenerator | 0 | 3 | 4 |
isaac.message_generators | ImageLoader | 0 | 2 | 12 |
isaac.message_generators | LatticeGenerator | 0 | 1 | 5 |
isaac.message_generators | PanTiltStateGenerator | 0 | 1 | 8 |
isaac.message_generators | ParkingSpotListGenerator | 0 | 1 | 5 |
isaac.message_generators | Plan2Generator | 0 | 2 | 7 |
isaac.message_generators | PointCloudGenerator | 0 | 1 | 5 |
isaac.message_generators | Polyline2Generator | 0 | 1 | 1 |
isaac.message_generators | PoseGenerator | 0 | 0 | 4 |
isaac.message_generators | RangeScanGenerator | 0 | 1 | 11 |
isaac.message_generators | RigidBody3GroupGenerator | 0 | 1 | 7 |
isaac.message_generators | TensorGenerator | 0 | 1 | 2 |
isaac.message_generators | TrajectoryListGenerator | 0 | 1 | 4 |
isaac.ml | BoundingBoxPadding | 1 | 1 | 2 |
isaac.ml | ColorCameraEncoderCpu | 1 | 1 | 4 |
isaac.ml | ColorCameraEncoderCuda | 1 | 1 | 5 |
isaac.ml | ConfusionMatrixAggregator | 1 | 1 | 1 |
isaac.ml | Detection3Encoder | 1 | 1 | 1 |
isaac.ml | DetectionComparer | 2 | 1 | 2 |
isaac.ml | DetectionEncoder | 1 | 1 | 2 |
isaac.ml | Detections3Comparer | 2 | 1 | 1 |
isaac.ml | FilterDetections | 1 | 1 | 5 |
isaac.ml | GenerateKittiDataset | 2 | 0 | 3 |
isaac.ml | HeatmapDecoder | 1 | 1 | 2 |
isaac.ml | HeatmapEncoder | 1 | 1 | 0 |
isaac.ml | ImageDetectionExtraction | 2 | 1 | 3 |
isaac.ml | LabelToBoundingBox | 3 | 1 | 2 |
isaac.ml | ResizeDetections | 1 | 1 | 2 |
isaac.ml | RigidbodyToDetections3 | 1 | 1 | 1 |
isaac.ml | SampleAccumulator | 0 | 0 | 3 |
isaac.ml | SegmentationComparer | 2 | 1 | 2 |
isaac.ml | SegmentationDecoder | 1 | 1 | 1 |
isaac.ml | SegmentationEncoder | 2 | 1 | 7 |
isaac.ml | Teleportation | 1 | 2 | 26 |
isaac.ml | TensorArgMax | 1 | 1 | 2 |
isaac.ml | TensorChannelSum | 1 | 1 | 2 |
isaac.ml | TensorRTInference | 0 | 0 | 14 |
isaac.ml | TensorReshape | 1 | 1 | 1 |
isaac.ml | TensorflowInference | 0 | 0 | 4 |
isaac.ml | TorchInference | 0 | 0 | 4 |
isaac.ml | WaitUntilDetection | 1 | 1 | 2 |
isaac.monitor | MaxValueGradeFusion | 0 | 0 | 0 |
isaac.monitor | Monitor | 0 | 0 | 3 |
isaac.navigation | BinaryToDistanceMap | 2 | 1 | 5 |
isaac.navigation | CollisionMonitor | 1 | 1 | 3 |
isaac.navigation | DetectionsToAtlas | 1 | 0 | 1 |
isaac.navigation | DifferentialBaseMockup | 1 | 2 | 8 |
isaac.navigation | DifferentialBaseOdometry | 1 | 1 | 6 |
isaac.navigation | DifferentialBaseWheelImuOdometry | 2 | 1 | 9 |
isaac.navigation | DistanceMap | 0 | 0 | 1 |
isaac.navigation | FollowPath | 2 | 1 | 7 |
isaac.navigation | GoToBehavior | 1 | 0 | 0 |
isaac.navigation | GoalMonitor | 2 | 1 | 3 |
isaac.navigation | GoalToPlan | 1 | 1 | 1 |
isaac.navigation | GotoWaypointBehavior | 1 | 0 | 2 |
isaac.navigation | HolonomicBaseWheelImuOdometry | 2 | 1 | 8 |
isaac.navigation | MapWaypointAsGoal | 1 | 1 | 2 |
isaac.navigation | MapWaypointAsGoalSimulator | 1 | 0 | 3 |
isaac.navigation | MapWaypointsAsPlan | 0 | 1 | 4 |
isaac.navigation | MoveAndScan | 1 | 1 | 1 |
isaac.navigation | MoveUntilArrival | 2 | 0 | 3 |
isaac.navigation | NavigationMap | 0 | 0 | 4 |
isaac.navigation | NavigationMonitor | 1 | 1 | 5 |
isaac.navigation | PoseAsGoal | 0 | 1 | 4 |
isaac.navigation | PoseHeatmapGenerator | 1 | 1 | 4 |
isaac.navigation | RandomMapPoseSampler | 0 | 0 | 2 |
isaac.navigation | RandomWalk | 1 | 1 | 2 |
isaac.navigation | RangeScanModelClassic | 0 | 0 | 5 |
isaac.navigation | RangeScanModelFlatloc | 0 | 0 | 7 |
isaac.navigation | RangeScanRobotRemoval | 1 | 1 | 2 |
isaac.navigation | RangeScanToObservationMap | 1 | 2 | 6 |
isaac.navigation | RobotPoseGenerator | 0 | 0 | 4 |
isaac.navigation | RobotRemoteControl | 2 | 1 | 8 |
isaac.navigation | RobotViewer | 1 | 0 | 5 |
isaac.navigation | TemporaryObstacle | 0 | 0 | 3 |
isaac.navigation | TravellingSalesman | 0 | 1 | 5 |
isaac.navigation | VirtualGamepadBridge | 1 | 2 | 3 |
isaac.navigation | WaitUntilStationary | 1 | 0 | 2 |
isaac.navsim | ScenarioManager | 1 | 2 | 6 |
isaac.navsim | ScenarioMonitor | 4 | 1 | 9 |
isaac.object_pose_estimation | BoundingBoxEncoder | 1 | 1 | 1 |
isaac.object_pose_estimation | CodebookLookup | 1 | 2 | 2 |
isaac.object_pose_estimation | CodebookPoseSampler | 0 | 1 | 12 |
isaac.object_pose_estimation | CodebookWriter | 2 | 1 | 0 |
isaac.object_pose_estimation | DopeDecoder | 2 | 1 | 7 |
isaac.object_pose_estimation | GeneratePoseCnnDecoderData | 5 | 0 | 5 |
isaac.object_pose_estimation | ImagePoseEncoder | 3 | 1 | 0 |
isaac.object_pose_estimation | PoseCnnDecoder | 4 | 1 | 3 |
isaac.object_pose_estimation | PoseEncoder | 2 | 2 | 1 |
isaac.object_pose_estimation | PoseEstimation | 4 | 1 | 0 |
isaac.object_pose_estimation | PoseFiducialsPnP | 2 | 2 | 3 |
isaac.object_pose_refinement | PoseRefinement | 3 | 1 | 14 |
isaac.orb | ExtractAndVisualizeOrb | 1 | 2 | 5 |
isaac.otg5 | DualOtg5 | 2 | 1 | 17 |
isaac.otg5 | GoalToDelta | 1 | 1 | 2 |
isaac.otg5 | ObstacleDeltaLimiter | 1 | 1 | 7 |
isaac.otg5 | RotateTranslateRotateStateMachine | 2 | 1 | 5 |
isaac.path_planner | EndEffectorGlobalPlanner | 2 | 1 | 12 |
isaac.path_planner | GlobalPlanSmoother | 1 | 1 | 8 |
isaac.path_planner | GlobalPlanner | 2 | 1 | 16 |
isaac.path_planner | PlanCorrectionValidation | 1 | 1 | 3 |
isaac.path_planner | Pose2DirectedGraphLoader | 0 | 0 | 2 |
isaac.path_planner | Pose2DirectedGraphPlanner | 0 | 0 | 4 |
isaac.path_planner | Pose2GraphPlanner | 1 | 1 | 7 |
isaac.path_planner | Pose2GridGraphBuilder | 0 | 0 | 11 |
isaac.path_planner | Pose2WaypointFollower | 1 | 1 | 2 |
isaac.perception | BirdViewProjection | 3 | 2 | 1 |
isaac.perception | ColorSpaceConverter | 1 | 1 | 3 |
isaac.perception | CropAndDownsample | 2 | 2 | 4 |
isaac.perception | CropAndDownsampleCuda | 2 | 2 | 4 |
isaac.perception | FreespaceToFlatscan | 1 | 1 | 0 |
isaac.perception | ImageWarp | 2 | 2 | 5 |
isaac.perception | PointCloudAccumulator | 1 | 1 | 1 |
isaac.perception | RangeScanFlattening | 1 | 1 | 5 |
isaac.perception | RangeToPointCloud | 1 | 1 | 4 |
isaac.perception | ScanAccumulator | 1 | 1 | 3 |
isaac.perception | StereoRectification | 6 | 6 | 3 |
isaac.planner | DifferentialBaseModel | 0 | 0 | 2 |
isaac.planner | DifferentialBaseVelocityIntegrator | 1 | 1 | 1 |
isaac.planner | MultiJointPlanner | 2 | 1 | 2 |
isaac.planner | SphericalRobotShapeComponent | 0 | 0 | 2 |
isaac.planner_cost | AdditionBuilder | 0 | 0 | 1 |
isaac.planner_cost | BoundedQuadraticCostBuilder | 0 | 0 | 5 |
isaac.planner_cost | CirclesUnionSmoothDistanceBuilder | 0 | 0 | 2 |
isaac.planner_cost | DistanceQuadraticCostBuilder | 0 | 0 | 3 |
isaac.planner_cost | DotProductCostBuilder | 0 | 0 | 2 |
isaac.planner_cost | ObstacleDistanceBuilder | 0 | 0 | 2 |
isaac.planner_cost | PolygonDistanceQuadraticCostBuilder | 0 | 0 | 5 |
isaac.planner_cost | PolygonSpeedLimitBuilder | 0 | 0 | 9 |
isaac.planner_cost | PolylineDistanceQuadraticCostBuilder | 0 | 0 | 11 |
isaac.planner_cost | RangeConstraintsCostBuilder | 0 | 0 | 3 |
isaac.planner_cost | SmoothLinearTargetCostBuilder | 0 | 0 | 5 |
isaac.planner_cost | SmoothMinimumBuilder | 0 | 0 | 2 |
isaac.pwm | PwmController | 2 | 0 | 2 |
isaac.rgbd_processing | DepthEdges | 1 | 1 | 5 |
isaac.rgbd_processing | DepthImageFlattening | 2 | 1 | 12 |
isaac.rgbd_processing | DepthImageToPointCloud | 3 | 1 | 1 |
isaac.rgbd_processing | DepthNormals | 2 | 1 | 3 |
isaac.rgbd_processing | DepthPoints | 2 | 1 | 2 |
isaac.rgbd_processing | FreespaceFromDepth | 2 | 1 | 15 |
isaac.rl | DollyDockingAuxDecoder | 2 | 1 | 0 |
isaac.rl | DollyDockingBirth | 0 | 0 | 13 |
isaac.rl | DollyDockingDeath | 0 | 0 | 4 |
isaac.rl | DollyDockingReward | 0 | 0 | 6 |
isaac.rl | DollyDockingStateDecoder | 2 | 1 | 3 |
isaac.rl | DollyDockingStateNoiser | 0 | 0 | 1 |
isaac.rl | StateMachineGymFlow | 3 | 6 | 7 |
isaac.rl | TemporalBatching | 5 | 6 | 5 |
isaac.rl | TensorAggregator | 0 | 1 | 1 |
isaac.rl | TensorDeaggregator | 1 | 0 | 0 |
isaac.rl | TensorToCompositeVelocityProfile | 1 | 1 | 5 |
isaac.rmpflow | MultiJointRmpPlanner | 0 | 0 | 7 |
isaac.ros_bridge | CameraIntrinsicsToRos | 0 | 0 | 1 |
isaac.ros_bridge | FlatscanToRos | 0 | 0 | 1 |
isaac.ros_bridge | GoalToRos | 0 | 0 | 2 |
isaac.ros_bridge | GoalToRosAction | 2 | 1 | 5 |
isaac.ros_bridge | ImageToRos | 0 | 0 | 1 |
isaac.ros_bridge | OdometryToRos | 0 | 0 | 2 |
isaac.ros_bridge | PosesToRos | 0 | 0 | 2 |
isaac.ros_bridge | RosNode | 0 | 0 | 1 |
isaac.ros_bridge | RosToCameraIntrinsics | 0 | 0 | 0 |
isaac.ros_bridge | RosToDifferentialBaseCommand | 0 | 0 | 0 |
isaac.ros_bridge | RosToImage | 0 | 0 | 0 |
isaac.ros_bridge | RosToPoses | 0 | 0 | 2 |
isaac.safety | CarterSimBridge | 1 | 1 | 0 |
isaac.safety | CarterTeleop | 0 | 3 | 2 |
isaac.safety | EstopController | 2 | 2 | 2 |
isaac.safety | FaultMonitor | 1 | 2 | 1 |
isaac.safety | TestSink | 2 | 0 | 0 |
isaac.safety | TestSource | 0 | 2 | 2 |
isaac.scenario_engine | PolylineTransform | 0 | 0 | 7 |
isaac.scenario_engine | PoseTreeEdgeToRigidBodyGroup | 1 | 1 | 0 |
isaac.scenario_engine | SpawnActors | 0 | 1 | 5 |
isaac.sight | AliceSight | 0 | 0 | 0 |
isaac.sight | SightWidget | 0 | 0 | 10 |
isaac.sight | WebsightServer | 0 | 0 | 8 |
isaac.skeleton_pose_estimation | OpenPoseDecoder | 3 | 1 | 14 |
isaac.skeleton_pose_estimation | SkeletonsPnP | 2 | 2 | 3 |
isaac.superpixels | RgbdSuperpixelCostMap | 2 | 2 | 6 |
isaac.superpixels | RgbdSuperpixels | 5 | 2 | 18 |
isaac.superpixels | SuperpixelImageLabeling | 2 | 1 | 1 |
isaac.surflets | EstimateGroundPlane | 1 | 2 | 3 |
isaac.surflets | MinimumDistanceAssignment | 0 | 0 | 1 |
isaac.surflets | PointCloudClustering | 2 | 1 | 4 |
isaac.surflets | PointDistance | 0 | 0 | 0 |
isaac.surflets | PositionNormalDistance | 0 | 0 | 0 |
isaac.surflets | SurfletMasking | 3 | 1 | 1 |
isaac.trajectory_validation | ApplicabilityCheck | 0 | 0 | 8 |
isaac.trajectory_validation | CollisionCheck | 0 | 0 | 10 |
isaac.trajectory_validation | FeasibilityCheck | 0 | 0 | 5 |
isaac.trajectory_validation | RangeCheck | 0 | 0 | 3 |
isaac.trajectory_validation | SpeedCheck | 0 | 0 | 1 |
isaac.trajectory_validation | StateValidation | 3 | 1 | 2 |
isaac.trajectory_validation | TrajectorySelection | 4 | 1 | 1 |
isaac.trajectory_validation | TrajectoryValidation | 2 | 1 | 3 |
isaac.utils | CompositeToDifferentialTrajectoryConverter | 1 | 1 | 1 |
isaac.utils | DetectionUnprojection | 3 | 1 | 3 |
isaac.utils | Detections3Filter | 1 | 1 | 6 |
isaac.utils | DetectionsToPoseTree | 1 | 0 | 4 |
isaac.utils | DifferentialTrajectoryToPlanConverter | 1 | 1 | 1 |
isaac.utils | FlatscanToPointCloud | 1 | 1 | 0 |
isaac.utils | JoystickConfirmation | 1 | 0 | 2 |
isaac.utils | Plan2Converter | 1 | 1 | 1 |
isaac.utils | Pose2GaussianDistributionEstimation | 1 | 1 | 2 |
isaac.utils | PoseEvaluation | 0 | 1 | 8 |
isaac.utils | PoseMonitor | 0 | 1 | 2 |
isaac.utils | PoseTreeFeed | 0 | 1 | 0 |
isaac.utils | RigidBodiesToDetections | 1 | 1 | 1 |
isaac.utils | RigidBodyToOdometry | 1 | 1 | 5 |
isaac.utils | SegmentationCameraProtoSplitter | 1 | 4 | 0 |
isaac.utils | SendTextMessages | 0 | 1 | 2 |
isaac.utils | WaitUntilDetection | 1 | 0 | 1 |
isaac.utils.deprecated | ColorCameraProtoSplitter | 1 | 3 | 1 |
isaac.utils.deprecated | DepthCameraProtoSplitter | 1 | 3 | 1 |
isaac.velodyne_lidar | VelodyneLidar | 0 | 1 | 3 |
isaac.viewers | BinaryMapViewer | 2 | 0 | 2 |
isaac.viewers | DepthCameraViewer | 2 | 0 | 7 |
isaac.viewers | Detections3Viewer | 1 | 0 | 6 |
isaac.viewers | DetectionsViewer | 1 | 0 | 9 |
isaac.viewers | FiducialsViewer | 1 | 0 | 2 |
isaac.viewers | FlatscanViewer | 1 | 0 | 4 |
isaac.viewers | GoalViewer | 1 | 0 | 1 |
isaac.viewers | ImageKeypointViewer | 2 | 0 | 4 |
isaac.viewers | ImageViewer | 2 | 0 | 4 |
isaac.viewers | MosaicViewer | 0 | 0 | 4 |
isaac.viewers | ObjectViewer | 0 | 0 | 3 |
isaac.viewers | OccupancyMapViewer | 2 | 0 | 1 |
isaac.viewers | ParkingSpotListViewer | 1 | 0 | 5 |
isaac.viewers | Plan2Viewer | 1 | 0 | 1 |
isaac.viewers | PointCloudViewer | 1 | 0 | 4 |
isaac.viewers | Polyline2Viewer | 1 | 0 | 4 |
isaac.viewers | PoseTrailViewer | 0 | 0 | 4 |
isaac.viewers | SegmentationCameraViewer | 3 | 0 | 3 |
isaac.viewers | SegmentationViewer | 3 | 0 | 6 |
isaac.viewers | SkeletonViewer | 1 | 0 | 2 |
isaac.viewers | TensorViewer | 1 | 1 | 6 |
isaac.viewers | TrajectoryListViewer | 1 | 0 | 1 |
isaac.viewers.deprecated | ColorCameraViewer | 1 | 0 | 4 |
isaac.wakanda | WakandaServer | 2 | 0 | 4 |
isaac.ydlidar | YdLidar | 0 | 1 | 1 |
isaac.zed | ZedImuReader | 0 | 1 | 2 |
Components¶
isaac.AdafruitNeoPixelLedStrip¶
Description
This class interfaces with an Arduino Trinket which receives I2C signals dictating color patterns for the Adafruit NeoPixels strip. The Arduino Trinket is loaded with a program that interprets the I2C signals into appropriately-timed PWM signals for the light strip. See the NeoPixels documentation under “Hardware Reference Designs” for more detailed setup instructions.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- led_strip [LedStripProto]: The desired LED strip configuration message
- Outgoing messages
- (none)
Parameters
- bus [int] [default=1]: The I2C bus of the LED strip. Default value is 1
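Like all Isaac components, this driver is instantiated in an application JSON file: the component is listed in a node, its incoming channel is wired in the edges section, and its parameters are set under config. Below is a minimal sketch that uses the SimpleLed codelet from this section as the pattern source; the node names are illustrative and not prescribed by the SDK:

{
  "graph": {
    "nodes": [
      {
        "name": "led_driver",
        "components": [
          { "name": "message_ledger", "type": "isaac::alice::MessageLedger" },
          { "name": "AdafruitNeoPixelLedStrip", "type": "isaac::AdafruitNeoPixelLedStrip" }
        ]
      }
    ],
    "edges": [
      {
        "source": "led_pattern/SimpleLed/led_strip",
        "target": "led_driver/AdafruitNeoPixelLedStrip/led_strip"
      }
    ]
  },
  "config": {
    "led_driver": {
      "AdafruitNeoPixelLedStrip": { "bus": 1 }
    }
  }
}

The same nodes/edges/config pattern applies to every component in this section; the sketches that follow show only the fragments specific to each component.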
isaac.ArgusCsiCamera¶
Description
Interfaces the libargus library to support CSI camera. Only supported on L4T systems like the Jetson Nano.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- image [ImageProto]: Channel to broadcast images extracted from the Argus feed
- intrinsics [CameraIntrinsicsProto]: Intrinsics including pinhole and distortion parameters
Parameters
- mode [int32_t] [default=]: Resolution mode of the camera. Supported values are 0: 2592 x 1944, 1: 2592 x 1458, 2: 1280 x 720.
- camera_id [int32_t] [default=]: System device numeral for the camera. For example select 0 for /dev/video0.
- framerate [int32_t] [default=]: Desired framerate
- focal_length [Vector2d] [default=]: Focal length of the camera in pixels
- optical_center [Vector2d] [default=]: Optical center in pixels
isaac.BlackFrameGenerator¶
Description
Publishes either the unmodified input_image during the transparent phase or a pure black image during the opaque phase. The initial phase is “transparent”. The codelet ticks when a new input_image is received.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_image [ImageProto]: Input frame
Outgoing messages
- output_image [ImageProto]: Output frame that’s either equal to input frame or pure black
Parameters
- opaque_tick_duration [int] [default=2]: Defines the number of ticks in the period during which a pure black frame is published. The oscillation period is the sum of the opaque and transparent tick durations.
- transparent_tick_duration [int] [default=1]: Defines the number of ticks in the period to publish the unmodified input image
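With the defaults above, the oscillation period is 2 + 1 = 3 received frames: two consecutive black frames followed by one pass-through frame. A config fragment (node name illustrative):

"black_frame_generator": {
  "BlackFrameGenerator": {
    "opaque_tick_duration": 2,
    "transparent_tick_duration": 1
  }
}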
isaac.ImageComparison¶
Description
Compares two images and reports the correlation (similarity) between them.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_image_a [ImageProto]: First input image
- input_image_b [ImageProto]: Second input image
- Outgoing messages
- (none)
Parameters
- correlation_threshold [float] [default=0.99]: The minimum correlation at which two images are considered the same
- down_scale_factor [int] [default=4]: Scaling of the displayed images in Sight
isaac.Joystick¶
Description
Publishes state for a joystick like an Xbox gamepad or other input device.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- js_state [JoystickStateProto]: The joystick message
Parameters
- deadzone [double] [default=0.05]: Size of the “deadzone” region, applied to both positive and negative values for each axis. For example, a deadzone of 0.05 will result in joystick readings in the range [-0.05, 0.05] being clamped to zero. Readings outside of this range are rescaled to fully cover [-1, 1]. In other words, the range [0.05, 1] is linearly mapped to [0, 1], and likewise for negative values.
- num_axes [int] [default=4]: Number of joystick axes (e.g., 4 axes might correspond to two 2-axis analogue sticks)
- num_buttons [int] [default=12]: Number of joystick buttons
- reconnect_interval [double] [default=1.0]: Reconnect interval, in seconds. This is the period between joystick connection attempts (i.e., attempts to open the joystick device file) when the initial attempt fails.
- input_timeout_interval [double] [default=0.1]: Input timeout interval, in seconds. This determines how long tick() will wait for input before giving up until tick() is called again. Note that stop() cannot succeed while tick() is waiting for input, so this timeout value should not be overly long.
- device [string] [default=”/dev/input/js0”]: Joystick device file (system-dependent)
- print_unsupported_buttons_warning [bool] [default=false]: Option controlling whether a warning will be logged when an event is received from an axis or button whose index exceeds num_axes or num_buttons, respectively
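A config fragment selecting a non-default device file and a typical gamepad layout (values illustrative). With deadzone set to 0.1, a raw axis reading of 0.55 is rescaled to (0.55 - 0.1) / (1 - 0.1) = 0.5:

"joystick": {
  "Joystick": {
    "device": "/dev/input/js1",
    "num_axes": 4,
    "num_buttons": 12,
    "deadzone": 0.1
  }
}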
isaac.LandmarksViewer¶
Description
The LandmarksViewer keeps the most recently visible landmarks and visualizes them in the 3D view as a point cloud. It assumes that the most recently visible landmarks have larger IDs.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- coordinates [TensorProto]: Coordinates for landmarks. A rank-1 tensor with dimensions (number of landmarks).
- ids [TensorProto]: Landmark ids. A rank-1 int tensor with dimensions (number of landmarks).
- Outgoing messages
- (none)
Parameters
- max_number_landmarks [size_t] [default=10000]: Maximum number of landmarks to keep
- target_fps [double] [default=5.0]: Maximum framerate at which landmarks are displayed in sight.
isaac.LivoxLidar¶
Description
A driver for the Livox Mid-40 lidar. The driver opens and maintains the UDP sockets for communication with the lidar. It accumulates samples and publishes them as a range point cloud. A missing lidar or an incorrect configuration results in the component stopping. The lidar publishes 100,000 points per second in size-configurable batches. A drop in communication (e.g. a loose cable or a network interface change) will also result in the component stopping.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- accumulated_point_cloud [PointCloudProto]: Output 3D point cloud samples. The point cloud is published when the point count reaches the configured minimum point count or when the time between message publishing is greater than the configured published interval.
Parameters
- device_ip [string] [default=”0.0.0.0”]: The IP address of the lidar device we want to connect to and receive data from. This parameter is changeable at configuration time.
- port_command [int] [default=50001]: The UDP port to send commands to the lidar. This parameter is changeable at configuration time.
- port_data [int] [default=50002]: The UDP port from which the data samples will be received. This parameter is changeable at configuration time.
- batch_count [int] [default=10]: Minimum number of accumulated point batches before publishing the point cloud. It can be configured and changed at runtime. The point cloud is published when the point count reaches the configured point batch count. Each batch is 100 data points per Livox communication protocol.
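Since each batch carries 100 points, the default batch_count of 10 publishes a point cloud once at least 1,000 points have accumulated; at the 100,000 points-per-second rate, a batch_count of 50 (5,000 points) would yield up to 20 messages per second. A config fragment (addresses illustrative):

"livox": {
  "LivoxLidar": {
    "device_ip": "192.168.1.12",
    "port_command": 50001,
    "port_data": 50002,
    "batch_count": 50
  }
}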
isaac.PanTiltDriver¶
Description
The PanTiltDriver class is a driver for a pan/tilt unit based on Dynamixel motors. As its name indicates, the unit is composed of two joints: the first performs a rotation about the z axis (pan), the second a rotation about the y axis (tilt). Each joint has a name (for example ‘pan’ and ‘tilt’), and in addition to controlling the pan/tilt unit this driver updates the PoseTree: the two transformations corresponding to the joints are kept up to date. The names of the edges in the PoseTree are pan_in_T_pan_out and tilt_in_T_tilt_out, where “pan” and “tilt” are the names of the two joints. In order to make these transformations useful, you should add to the PoseTree (for example using a PoseInitializer codelet) the transformation from the robot to pan_in and the transformation from the pan joint to the tilt joint (pan_out_T_tilt_in).
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- command [StateProto]: Current command for pan/tilt unit
Outgoing messages
- state [StateProto]: The state of the pan tilt unit
- motors [DynamixelMotorsProto]: State of Dynamixel motors
Parameters
- use_speed_control [bool] [default=true]: If set to true dynamixels are controlled in speed mode, otherwise they are controlled in position mode
- usb_port [string] [default=”/dev/ttyUSB0”]: USB port used to connect to the bus (A2D2 USB adapter)
- pan_servo_id [int] [default=1]: Dynamixel ID for pan servo
- tilt_servo_id [int] [default=2]: Dynamixel ID for tilt servo
- tilt_min [double] [default=-0.5]: Minimum value valid for tilt
- tilt_max [double] [default=2.0]: Maximum value valid for tilt
- pan_min [double] [default=-Pi<double>]: Minimum value valid for pan
- pan_max [double] [default=Pi<double>]: Maximum value valid for pan
- pan_offset [double] [default=5.83]: Constant offset in the pan angle such that pan = 0 has the end effector looking forward (X axis)
- tilt_offset [double] [default=4.73]: Constant offset in the tilt angle such that tilt = 0 has the end effector looking horizontally
- baudrate [int] [default=static_cast<int>(dynamixel::Baudrate::k1M)]: Baudrate of the Dynamixel bus. See packages/dynamixel/gems/registers.hpp for options. TODO Remove when refactored to use DynamixelDriver class
- model [int] [default=static_cast<int>(dynamixel::Model::XM430)]: What kind of dynamixel model it is: (AX12A = 0, XM430 = 1, MX12W = 2) TODO(jberling) refactor pan tilt to use DynamixelDriver class and switch to enum
- pan_joint_frame [string] [default=”pan”]: Name of the pan joint frame. The edge pan_in_T_pan_out will be added to the PoseTree.
- tilt_joint_frame [string] [default=”tilt”]: Name of the tilt joint frame. The edge tilt_in_T_tilt_out will be added to the PoseTree.
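A config fragment matching the description above. With these joint frame names the driver maintains the PoseTree edges pan_in_T_pan_out and tilt_in_T_tilt_out; the static edge pan_out_T_tilt_in still has to be supplied separately, for example by a PoseInitializer (not shown):

"pan_tilt": {
  "PanTiltDriver": {
    "usb_port": "/dev/ttyUSB0",
    "pan_servo_id": 1,
    "tilt_servo_id": 2,
    "pan_joint_frame": "pan",
    "tilt_joint_frame": "tilt",
    "use_speed_control": true
  }
}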
isaac.RealsenseCamera¶
Description
RealsenseCamera is an Isaac codelet for the Realsense D435 camera, which can provide color, depth, and infrared (IR) images.
You can change the resolution of the camera via various configuration parameters. However only certain modes are supported:
- 1280x720 (at most 30 Hz)
- 848x480
- 640x480
- 640x360
- 424x240
Valid framerates for the color image are 60, 30, 15, and 6 FPS. Valid framerates for the depth image are 90, 60, 30, 15, and 6 FPS. The camera can also produce images at 1080p resolution; however, this is currently not supported because color and depth are set to the same resolution.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- left_ir [ImageProto]: The left IR camera image
- right_ir [ImageProto]: The right IR camera image
- color [ImageProto]: The color camera image, which can be of type Image3ub for color or Image1ui16 for grayscale.
- depth [ImageProto]: The depth image (in meters) in the left IR camera frame
- color_intrinsics [CameraIntrinsicsProto]: Intrinsics including pinhole and distortion parameters for the color camera
- left_ir_intrinsics [CameraIntrinsicsProto]: Intrinsics including pinhole and distortion parameters for the left IR camera
- right_ir_intrinsics [CameraIntrinsicsProto]: Intrinsics including pinhole and distortion parameters for the right IR camera
- depth_intrinsics [CameraIntrinsicsProto]: Intrinsics including pinhole parameters for the depth camera
Parameters
- rows [int] [default=360]: The vertical resolution for both color and depth images.
- cols [int] [default=640]: The horizontal resolution for both color and depth images.
- ir_framerate [int] [default=30]: The framerate of the left and right IR sensors. Valid values are 90, 60, 30, 25, 15, 6. It should match the depth map framerate due to the RS 435 firmware constraints.
- color_framerate [int] [default=30]: The framerate of the RGB camera acquisition. Valid values are 60, 30, 15, 6.
- depth_framerate [int] [default=30]: The framerate of the depth map generation. Valid values are 90, 60, 30, 15, 6. It should match the IR framerate due to the RS 435 firmware constraints.
- align_to_color [bool] [default=true]: If enabled, the depth image is spatially aligned to the color image to provide matching color and depth values for every pixel. This is a CPU-intensive process and can reduce frame rates.
- frame_queue_size [int] [default=2]: Maximum number of frames that can be held at a given time. Increasing this number reduces frame drops but increases latency, and vice versa; valid values range from 0 to 32.
- auto_exposure_priority [bool] [default=false]: Limit exposure time when auto-exposure is ON to preserve a constant FPS rate.
- laser_power [int] [default=150]: Amount of power used by the depth laser, in mW. Valid values are between 0 and 360, in increments of 30.
- enable_ir_stereo [bool] [default=false]: Enable acquisition and publication of the IR stereo pair. This setting can’t be changed at runtime.
- enable_color [bool] [default=true]: Enable acquisition and publication of the color frames. This setting can’t be changed at runtime.
- enable_depth [bool] [default=true]: Enable depth map computation and publication. This setting can’t be changed at runtime.
- enable_depth_laser [bool] [default=true]: Enable the depth laser projector to improve the depth image accuracy. Disabling it helps the visual odometry tracker by removing the dot pattern from the IR stereo pair. This setting can’t be changed at runtime.
- enable_auto_exposure [bool] [default=true]: Enable auto exposure. Disabling it can reduce motion blur
- dev_index [int] [default=0]: The index of the Realsense device in the list of devices detected. This indexing is dependent on the order the Realsense library detects the cameras, and may vary based on mounting order. By default the first camera device in the list is chosen. This camera choice can be overridden by the serial number parameter below.
- serial_number [string] [default=”“]: An alternative way to specify the desired device in a multicamera setup. The serial number of the Realsense camera can be found printed on the device. If specified, this parameter will take precedence over the dev_index parameter above.
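A config fragment selecting the 848x480 mode at 30 Hz. Note that ir_framerate and depth_framerate are kept equal, per the firmware constraint noted above (node name illustrative):

"camera": {
  "RealsenseCamera": {
    "rows": 480,
    "cols": 848,
    "color_framerate": 30,
    "depth_framerate": 30,
    "ir_framerate": 30,
    "align_to_color": true
  }
}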
isaac.SegwayRmpDriver¶
Description
A driver for the Segway RMP base.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- segway_cmd [StateProto]: Linear and angular speed command for driving segway (navigation::DifferentialBaseControl type)
Outgoing messages
- segway_state [StateProto]: State of the segway consisting of linear and angular speeds and accelerations (DifferentialBaseDynamics)
Parameters
- ip [string] [default=”192.168.0.40”]: Isaac will use this IP to talk to segway
- port [int] [default=8080]: Isaac will use this port to talk to segway
- flip_orientation [bool] [default=true]: If true, segway’s forward direction will be flipped
- speed_limit_linear [double] [default=1.1]: Maximum linear speed segway is allowed to travel with
- speed_limit_angular [double] [default=1.0]: Maximum angular speed segway is allowed to rotate with
isaac.SerialBMI160¶
Description
SerialBMI160 is a driver that uses a serial connection to talk to the BMI160 Inertial Measurement Unit (IMU).
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- imu [ImuProto]: IMU data including linear accelerations and angular velocities
Parameters
- device [string] [default=”/dev/ttyUSB0”]: Device path for the IMU
isaac.SimpleLed¶
Description
Simple codelet to cycle through red, green, and blue colors to test LED display
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- led_strip [LedStripProto]: The outgoing LED strip message for the driver to display
- Parameters
- (none)
isaac.SlackBot¶
Description
A SlackBot to perform authentication and listen for incoming commands
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- slack_message [ChatMessageProto]: Messages to be sent to the slack server
Outgoing messages
- user_instruction [ChatMessageProto]: Messages received from the slack server
Parameters
- bot_token [string] [default=]: Slack bot token given on the Slack app config page. A token can only be used by one Slackbot; multiple robots on the same token are not supported.
- slack_connect_url [string] [default=”https://slack.com/api/rtm.connect”]: Slack URL to which the connection request will be sent
isaac.StereoVisualOdometry¶
Description
This is a Stereo Visual Odometry codelet based on an implementation from NVIDIA. The input to the codelet is a left and right grayscale image pair with known intrinsics and extrinsics (the relative transformation between the cameras). The output of the codelet is a 6DOF pose of the left camera. The coordinate frame for this 6DOF pose is X front, Y left, and Z up, where front means the direction of the optical axis of the camera.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- left_image [ImageProto]: Gray input left image. Images should be rectified prior to being passed in here.
- left_intrinsics [CameraIntrinsicsProto]: Left image intrinsics parameters
- right_image [ImageProto]: Gray input right image
- right_intrinsics [CameraIntrinsicsProto]: Right image intrinsics parameters
- imu [ImuProto]: IMU readings
Outgoing messages
- left_camera_pose [Pose3dProto]: The 6 DOF pose of the left camera. The pose is not published if the tracker is lost.
- coordinates [TensorProto]: Output keypoint coordinates
- features [TensorProto]: Output feature ids. A rank-1 int tensor with dimensions (number of features).
- landmark_coordinates [TensorProto]: Output landmarks (triangulated features) coordinates
- landmark_ids [TensorProto]: Output landmark ids. A rank-1 int tensor with dimensions (number of landmarks).
Parameters
- denoise_input_images [bool] [default=false]: Enable image denoising. Disable if the input images have already passed through a denoising filter.
- horizontal_stereo_camera [bool] [default=true]: Enable fast and robust left-to-right tracking for rectified cameras with principal points on the horizontal line.
- process_imu_readings [bool] [default=true]: Enable IMU data acquisition and integration
- num_points [int] [default=100]: Number of points to include in the pose trail debug visualization
- icon_scale [double] [default=1.0]: Scale of the camera pyramid icon in the visualization
- lhs_camera_frame [string] [default=]: Defines the name of the left camera frame. It is used to calculate the left_T_right camera extrinsics.
- rhs_camera_frame [string] [default=]: Defines the name of the right camera frame. It is used to calculate the left_T_right camera extrinsics.
- imu_frame [string] [default=]: Defines the name of the IMU frame used to calculate left_camera_T_imu, the IMU-to-left-camera transformation.
- gravitational_force [Vector3d] [default=Vector3d(0.0, -9.80665, 0.0)]: The gravitational force vector
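A config fragment wiring the frame names used for the left_T_right and left_camera_T_imu lookups; these frames must already exist in the PoseTree. The camera frame names below reuse the ZedCamera defaults from this section, and the IMU frame name is illustrative:

"svo": {
  "StereoVisualOdometry": {
    "lhs_camera_frame": "zed_left_camera",
    "rhs_camera_frame": "zed_right_camera",
    "imu_frame": "zed_imu",
    "process_imu_readings": true
  }
}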
isaac.TorchInferenceTestDisplayOutput¶
Description
This component is specifically designed to test the TorchInference component and is specific to the torch_inference app. It displays the output received from the TorchInference component.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- test_output [TensorProto]: Receives tensor output from the Torch inference
- Outgoing messages
- (none)
- Parameters
- (none)
isaac.TorchInferenceTestSendInput¶
Description
This component is specifically designed to test the TorchInference component and is specific to the torch_inference app. It sends input to the TorchInference component, which loads the given model and runs inference on this input.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- test_input [TensorProto]: Sends tensors as input to Torch inference
Parameters
- input_value [float] [default=]:
isaac.V4L2Camera¶
Description
V4L2Camera is a camera driver implemented using V4L2. Currently this driver only accepts images from cameras in YUYV format and automatically converts them to RGB.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- frame [ImageProto]: Each frame output by the camera
- intrinsics [CameraIntrinsicsProto]: Intrinsics including pinhole and distortion parameters
Parameters
- device_id [int32_t] [default=0]: Which camera should be opened
- rows [int32_t] [default=720]: Number of pixels in the height dimension. The requested image dimensions must match exactly what the camera is able to produce.
- cols [int32_t] [default=1280]: Number of pixels in the width dimension
- rate_hz [int32_t] [default=30]: Frames per second.
- hardware_image_queue_length [int32_t] [default=3]: Buffers are queued with the V4L2 driver so that the driver can write out images at the specified frame rate without delays. This may be changed by the camera when we are initializing.
- focal_length [Vector2d] [default=(Vector2d{700.0, 700.0})]: Focal length (in pixels) for the pinhole camera model
- optical_center [Vector2d] [default=(Vector2d{360.0, 640.0})]: Optical center of the projection for the pinhole camera model
- brightness [int32_t] [default=]: Adjustable camera parameters. v4l2-ctl can be used to check values, e.g., “v4l2-ctl --device=/dev/video0 --list-ctrls”. Descriptions below are taken from video4linux API documentation. Picture brightness, or more precisely, the black level
- contrast [int32_t] [default=]: Picture contrast or luma gain
- saturation [int32_t] [default=]: Picture color saturation or chroma gain
- gain [int32_t] [default=]: Gain control
- white_balance_temperature_auto [bool] [default=]: If true, white balance temperature will be automatically adjusted.
- white_balance_temperature [int32_t] [default=]: This control specifies the white balance settings as a color temperature in Kelvin. The white balance temperature must be between 2000 and 6500. This parameter is inactive if white_balance_temperature_auto is true
- exposure_auto [int32_t] [default=]: Exposure time and/or iris aperture. 0: Automatic exposure time, automatic iris aperture. 1: Manual exposure time, manual iris. 2: Manual exposure time, auto iris. 3: Auto exposure time, manual iris.
- exposure_absolute [int32_t] [default=]: Determines the exposure time of the camera sensor. The exposure time is limited by the frame interval. Drivers should interpret the values as 100 µs units, where the value 1 stands for 1/10000th of a second, 10000 for 1 second, and 100000 for 10 seconds.
- use_cuda_color_conversion [bool] [default=true]: Whether to convert from YUYV to RGB using CUDA; otherwise the CPU is used for the conversion.
isaac.Vicon¶
Description
This codelet publishes motion capture information from a Vicon Datastream. Use of this codelet requires a Vicon Datastream connected to camera equipment. It allows tracking of marker information and rigid body information relative to a world frame that can be user-defined during setup of the Vicon hardware.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- vicon_pose_tree [PoseTreeProto]: Pose tree message containing information from Vicon scene volume
- vicon_markers [MarkerListProto]: Marker list message containing all markers visible in Vicon scene volume
Parameters
- vicon_hostname [string] [default=”localhost”]: Hostname of the Vicon system
- vicon_port [string] [default=”801”]: Port to which the Vicon data is streaming
- reconnect_interval [double] [default=1.0]: Amount of time to wait before attempting to reconnect to the Vicon system
isaac.ZedCamera¶
Description
Provides stereo image pairs and calibration information from a ZED camera
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- left_camera_rgb [ImageProto]: Left color image
- right_camera_rgb [ImageProto]: Right color image
- left_camera_gray [ImageProto]: Left gray image
- right_camera_gray [ImageProto]: Right gray image
- extrinsics [Pose3dProto]: Camera pair extrinsics (right-to-left)
- left_intrinsics [CameraIntrinsicsProto]: Intrinsics including pinhole and distortion parameters for the left camera
- right_intrinsics [CameraIntrinsicsProto]: Intrinsics including pinhole and distortion parameters for the right camera
Parameters
- auto_exposure [bool] [default=true]: Automatic exposure control
- auto_white_balance [bool] [default=true]: Automatic white balance control
- brightness [int] [default=4]: Brightness level. Valid values are between 0 and 8.
- resolution [sl::RESOLUTION] [default=sl::RESOLUTION::VGA]: The resolution to use for the ZED camera. The following values can be set:
- RESOLUTION_HD2K: 2208x1242
- RESOLUTION_HD1080: 1920x1080
- RESOLUTION_HD720: 1280x720
- RESOLUTION_VGA: 672x376
- camera_fps [int] [default=60]: The image frame rate for the ZED camera. If set to 0, the highest FPS for the specified resolution will be used. The following are supported resolution/framerate combinations:
- RESOLUTION_HD2K (2208x1242): 15 fps
- RESOLUTION_HD1080 (1920x1080): 15, 30 fps
- RESOLUTION_HD720 (1280x720): 15, 30, 60 fps
- RESOLUTION_VGA (672x376): 15, 30, 60, 100 fps
If the requested camera_fps is unsupported, the closest available FPS will be used. ZED camera FPS is not tied to the codelet tick rate, as the camera has an independent on-board CPU.
- color_temperature [int] [default=2800]: The color temperature control. Valid values are between 2800 and 6500 with a step of 100.
- contrast [int] [default=4]: Contrast level. Valid values are between 0 and 8.
- exposure [int] [default=50]: Exposure control. Valid values are between 0 and 100. The parameter value is ignored if auto_exposure is enabled. The exposure time is interpolated linearly between 0.17072 ms and the maximum time for a specific frame rate. The following are maximum times for common framerates:
- 15 fps: setExposure(100) -> 19.97 ms
- 30 fps: setExposure(100) -> 19.97 ms
- 60 fps: setExposure(100) -> 10.84072 ms
- 100 fps: setExposure(100) -> 10.106624 ms
- gain [int] [default=50]: Gain control. Valid values are between 0 and 100. The parameter value is ignored if auto_exposure is enabled.
- gamma [int] [default=sl::VIDEO_SETTINGS_VALUE_AUTO]: The gamma control. Valid values are between 1 and 9. The special value -1 commands the camera ISP to auto-tune the parameter according to the lighting conditions.
- hue [int] [default=0]: The hue control. Valid values are between 0 and 11.
- saturation [int] [default=4]: The saturation control. Valid values are between 0 and 8.
- sharpness [int] [default=4]: The digital sharpening control. Valid values are between 0 and 8.
- device_id [int] [default=0]: The numeral of the system video device of the ZED camera. For example for /dev/video0 choose 0.
- enable_imu [bool] [default=false]: Turns on capture and publication of IMU data that is only supported by ZED Mini camera hardware
- settings_folder_path [string] [default=”./”]: The folder path to the settings file (SN#####.conf) for the zed camera. This file contains the calibration parameters for the camera.
- gpu_id [int] [default=0]: The GPU device to be used for ZED CUDA operations
- gray_scale [bool] [default=false]: Turns on gray scale images
- rgb [bool] [default=true]: Turns on RGB color images
- enable_factory_rectification [bool] [default=true]: Turns on rectification of images inside ZED camera
- lhs_camera_frame [string] [default=”zed_left_camera”]: The left camera frame used to define the left_T_right camera transform in the PoseTree
- rhs_camera_frame [string] [default=”zed_right_camera”]: The right camera frame used to define the left_T_right camera transform in the PoseTree
isaac.atlas.Pose2Comparer¶
Description
Reports success when the two poses are within threshold. Given frame name parameters, Pose2Comparer looks up first_lhs_T_first_rhs and second_lhs_T_second_rhs from the Pose Tree. It then computes the delta pose using (first_lhs_T_first_rhs)^(-1) * second_lhs_T_second_rhs. Magnitude in position and angle of this delta pose are then compared against the threshold parameter. If the two poses are close enough, success is reported.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- first_lhs_frame [string] [default=]: Name of the left hand side frame of the first pose
- first_rhs_frame [string] [default=]: Name of the right hand side frame of the first pose
- second_lhs_frame [string] [default=]: Name of the left hand side frame of the second pose
- second_rhs_frame [string] [default=]: Name of the right hand side frame of the second pose
- threshold [Vector2d] [default=]: This codelet reports success if this parameter is set and the relative difference between the two poses is less than this threshold in position and angle.
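For example, a sketch that reports success once the robot is within 0.2 m and 0.1 rad of a goal frame; here the delta pose is (world_T_robot)^(-1) * world_T_goal = robot_T_goal. Frame names and threshold values are illustrative:

"goal_checker": {
  "Pose2Comparer": {
    "first_lhs_frame": "world",
    "first_rhs_frame": "robot",
    "second_lhs_frame": "world",
    "second_rhs_frame": "goal",
    "threshold": [0.2, 0.1]
  }
}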
isaac.atlas.PoseFromFile¶
Description
Reads a pose from a json file on disk and sets it in the PoseTree. This codelet can be used, for example, to read the last known localization, which was written to disk by the PoseToFile codelet.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- lhs_frame [string] [default=]: Name of the reference frame of the left side of the pose
- rhs_frame [string] [default=]: Name of the reference frame of the right side of the pose
- filename [string] [default=]: Path to the file that contains the pose json. Expected layout:
{ “rotation”: { “yaw_degrees”: 180.0 }, “translation”: [25.0, 20.0, 0.0] }
or
[0.0, 0.0, 0.0, -1.0, 25.0, 20.0, 0.0]
Please see “What is the syntax for setting a Pose3d in json?” in the FAQ section of the documentation for more details.
isaac.atlas.PoseMessageInjector¶
Description
Receives pose information via messages and injects them into the pose tree.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- pose [PoseTreeEdgeProto]: Incoming pose messages to inject into the pose tree
- Outgoing messages
- (none)
Parameters
- frame_name_map [FrameNameMap] [default={}]: Map from frame names received to frame names to write. For example,
“frame_name_map”: { “robot”: “robot_4”, “odom”: “odom_4” }
would result in poses received for world_T_robot and robot_T_odom being written as world_T_robot_4 and robot_4_T_odom_4, respectively.
isaac.atlas.PoseToFile¶
Description
Reads pose from PoseTree and writes it as json to disk. This codelet can be used, for example, to save the last known localization. PoseFromFile codelet can then read the pose to continue localization from there.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- lhs_frame [string] [default=]: Name of the reference frame of the left side of the pose
- rhs_frame [string] [default=]: Name of the reference frame of the right side of the pose
- filename [string] [default=]: Path for the file to create with pose as json
isaac.atlas.PoseToMessage¶
Description
Reads desired pose from pose tree and publishes pose information as a message
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- pose [PoseTreeEdgeProto]: Outgoing pose message from pose tree
Parameters
- lhs_frame [string] [default=]: Name of the reference frame of the left side of the pose
- rhs_frame [string] [default=]: Name of the reference frame of the right side of the pose
isaac.atlas.PoseTreeRelink¶
Description
At start, changes the parent of rhs_frame from current_lhs_frame to desired_lhs_frame. Reports success if parent is changed and reports failure otherwise. This can be used, for example, after a robot lifts a cart that is detected in the world frame. After the lift, the cart would move with the robot, so the parent of the cart frame can be updated with this codelet from world to robot.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- current_lhs_frame [string] [default=]: Name of the current reference frame of the left side of the pose
- desired_lhs_frame [string] [default=]: Name of the desired reference frame of the left side of the pose
- rhs_frame [string] [default=]: Name of the reference frame of the right side of the pose
isaac.atlas.PoseTreeRemoveLink¶
Description
Codelet that deletes an edge in the PoseTree. The edge is deleted in the start function; the codelet reports success if it deleted the edge and failure otherwise.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- lhs_frame [string] [default=]: Name of the reference frame of the left side of the pose
- rhs_frame [string] [default=]: Name of the reference frame of the right side of the pose
isaac.atlas.PublishMapRoi¶
Description
Given a binary occupancy map of the environment, extracts the relevant portion of the map surrounding the agent as an evidence grid map. The structure of the evidence grid map is defined by the lattice input. The acqtime of the output message is the same as that of the world_T_robot_gt message.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- map_lattice [LatticeProto]: Lattice defining the structure of the extracted evidence grid map
- world_T_robot_gt [PoseTreeEdgeProto]: Expresses the relative pose of the region of interest with respect to the map
Outgoing messages
- roi [ImageProto]: The region of interest extracted from the binary map
Parameters
- map_node_name [string] [default=”map”]: Name of the map node that contains the occupancy map
- robot_gt_frame [string] [default=”robot_gt”]: Name of the robot frame
isaac.behavior_tree.WaitUntilMessage¶
Description
Reports success upon receiving a message at the parameterized channel name. This behavior codelet can be used to create more sophisticated behavior trees such as: 1. Do not move the car until camera images are received, 2. Slow down the car upon detecting a parking spot.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- channel_name [string] [default=”message”]: Name of input channel
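For the first example above (do not move the car until camera images are received), it suffices to wire a camera output into the parameterized channel; a behavior tree can then gate motion on this node reporting success. A sketch with illustrative node names:

{
  "graph": {
    "edges": [
      {
        "source": "camera/RealsenseCamera/color",
        "target": "wait_for_camera/WaitUntilMessage/message"
      }
    ]
  },
  "config": {
    "wait_for_camera": {
      "WaitUntilMessage": { "channel_name": "message" }
    }
  }
}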
isaac.cask.Recorder¶
Description
Records data into a log file. This component can for example be used to write incoming messages to a log file. The messages could be replayed later using the Replay component.
In order to record a message channel, set up an edge from the publishing component to the Recorder component. The source channel is the name of the channel under which the publishing component publishes the data. The target channel name on the Recorder component can be chosen freely. When data is replayed, it will be published by the Replay component under that same channel name.
Warning: Please note that the log container format is not yet final and that breaking changes might occur in the future.
The root directory used to log data is base_directory/exec_uuid/tag/… where both base_directory and tag are configuration parameters. exec_uuid is a UUID which changes for every execution of an app and is unique over all possible executions. If tag is the empty string, the root log directory is just base_directory/exec_uuid/….
Multiple recorders can write to the same root log directory, in which case they share the same key-value database. However, only one recorder is allowed per log series: if the same component/key channel is logged by two different recorders, they cannot write to the same log directory.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- base_directory [string] [default=”/tmp/isaac”]: The base directory used as part of the log directory (see class comment)
- batch_accumulation_duration [double] [default=0.1]: Maximum duration to wait before writing out messages, in seconds
- tag [string] [default=”“]: A tag used as part of the log directory (see class comment)
- use_compression [bool] [default=false]: If enabled compression is used when writing to the log file
- max_buffer_size_mb [int] [default=4096]: If messages cannot be written out fast enough, they are kept in memory. If more than a certain amount of memory is used, no new messages are accepted until all messages are written out to disk.
- max_bytes_written_per_tick_mb [double] [default=100]: Maximum total size of messages written per tick.
- min_bytes_per_tick_per_channel_mb [double] [default=0.2]: Minimum total size of messages written per tick and channel
- start_recording_automatically [bool] [default=true]: If true, the recording will automatically start when this Codelet starts.
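A sketch recording the Segway state channel described earlier in this section; the target channel name (here segway_state) is chosen freely and is the name under which Replay will later publish the data. Node names are illustrative:

{
  "graph": {
    "edges": [
      {
        "source": "segway/SegwayRmpDriver/segway_state",
        "target": "recorder/isaac.cask.Recorder/segway_state"
      }
    ]
  },
  "config": {
    "recorder": {
      "isaac.cask.Recorder": {
        "base_directory": "/tmp/isaac",
        "tag": "test_run"
      }
    }
  }
}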
isaac.cask.Replay¶
Description
Replays data from a log file which was recorded by a Recorder component. See the documentation for the Recorder component for more information.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- cask_directory [string] [default=”“]: The cask directory used to replay data from
- replay_time_offset [int64_t] [default=0]: Time offset from within the log at which to start the replay
- use_recorded_message_time [bool] [default=false]: Decides whether to use the recorded message pubtime and acqtime, or to use the current time at replay as pubtime and synchronize the acqtime using the starting time of the replay.
- loop [bool] [default=false]: If enabled, the replay will start again from the beginning once the log has been fully replayed
- auto_stop_application [bool] [default=false]: If enabled automatically stops the application when the log was fully replayed
- report_success [bool] [default=false]: If enabled, sets the node status to success when the log was fully replayed, or to failure in case of errors
isaac.cask.ReplayBridge¶
Description
Communication bridge between the WebsightServer and a Replay node
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- request [nlohmann::json]: Request to replay node
Outgoing messages
- reply [nlohmann::json]: Reply from replay node
Parameters
- replay_component_name [string] [default=]: Replay component name in format node/component. Ex: replay/isaac.alice.Replay
- websight_replay_channel [string] [default=”websight/WebsightServer/replay”]: The full WebSight channel name to take record request from
- websight_replay_reply_channel [string] [default=”websight/WebsightServer/replay_reply”]: The full WebSight channel name to send record reply to
isaac.composite.CompositeAtlas¶
Description
A database which provides access to composite protos as waypoints. The waypoints are loaded from a cask file on a lazy-loading basis and stored as CompositeWaypoint.
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- cask [string] [default=]: Cask filename which contains composite protos.
isaac.composite.CompositeMetric¶
Description
Computes the distance between two composite protos. The distance is determined by three factors (sketched below):
- Schema: the schema used to parse values from the two composites. This schema must be a subset of the schemas in both protos.
- Norm(s): for each quantity in the schema, the value is represented by a vector. The distance between vectors depends on the norm used, for example L1, L2, etc. The norm is specified via a double p: for p >= 1 this represents the finite p-norm, while the infinity norm is represented by p = -1.0.
- Weight(s): the distance of each quantity is summed with weights to produce the total distance for the two protos.
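A minimal sketch of this distance computation, assuming values have already been parsed per quantity (the function name and data layout are illustrative, not the codelet's API):

```python
import numpy as np

def composite_distance(values_a, values_b, norms, weights):
    """Weighted distance between two composites parsed with a shared schema.

    values_a, values_b: one value vector per schema quantity.
    norms: p per quantity (p >= 1: finite p-norm; p == -1.0: infinity norm).
    weights: one weight per quantity.
    """
    total = 0.0
    for a, b, p, w in zip(values_a, values_b, norms, weights):
        diff = np.abs(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))
        dist = diff.max() if p == -1.0 else float((diff ** p).sum() ** (1.0 / p))
        total += w * dist
    return total

# One 2D position quantity, default L2 norm and unit weight:
print(composite_distance([[0.0, 0.0]], [[3.0, 4.0]], [2.0], [1.0]))  # 5.0
```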
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- use_config_schema [bool] [default=false]: If true, the schema from the config below is used to compute distance. Otherwise, caller should set the schema
- schema [Schema] [default=]: The schema to compute distance for. See json_formatter.hpp for the JSON representation of Schema. This is used only when use_config_schema is set to true.
- norms [VectorXd] [default=VectorXd::Constant(1, 2.0)]: The list of p-norms to compute distance for each quantity in schema. The size should match the list of quantities in schema, or one in which case it’s applied to all quantities.
- weights [VectorXd] [default=VectorXd::Constant(1, 1.0)]: Optional parameter to multiply with each quantity’s distance to get the total distance.
isaac.composite.CompositePublisher¶
Description
Publishes a composite proto containing a batch of waypoints specified on a path. The waypoints are read from a CompositeAtlas database.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- path [CompositeProto]: Publishes all the waypoints in path as a batch. A new message is published only when the path changes.
Parameters
- atlas [string] [default=]: Name of the node containing the CompositeAtlas component.
- path [std::vector<std::string>] [default={}]: The list of waypoint names as the path to follow. Reports failure if any waypoint on the path does not exist in CompositeAtlas. A new message is sent if this config changes.
- use_config_schema [bool] [default=false]: If true, use the schema set in config below. Otherwise use schema of the first waypoint in the path read from cask as the schema for the whole path, and ignore the schema set in config.
- schema [Schema] [default=]: Sets the schema to publish. This must be a subset of the schemas in the path waypoints. Only used if use_config_schema is set to true.
- report_success [bool] [default=false]: If true, report success after publishing a valid path.
isaac.composite.FollowPath¶
Description
Receives a sequence of waypoints via a message and publishes the waypoints one by one. The next waypoint is published when the current state is within tolerance of the current waypoint. When a new path is received, the current waypoint is reset to the first waypoint in the path. This codelet requires a CompositeMetric component in the same node to specify how to compute the distance between the waypoint and the received state message.
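The waypoint-advancing logic can be sketched as follows (illustrative only; the wait_time pause and message handling are omitted):

```python
def next_waypoint_index(path, index, distance_to_waypoint,
                        tolerance=0.1, loop=False):
    """Returns the index of the waypoint to publish, or None on completion."""
    if distance_to_waypoint < tolerance:  # distance from CompositeMetric
        index += 1
        if index == len(path):
            return 0 if loop else None  # None: report success
    return index
```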
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- path [CompositeProto]: Receives the path to follow given as a batch of waypoints
- state [CompositeProto]: Receives the current state, used to check whether the current waypoint has been reached.
Outgoing messages
- goal [CompositeProto]: Publishes the desired goal waypoint.
Parameters
- wait_time [double] [default=1.0]: Seconds to wait after arriving at the current waypoint before publishing the next waypoint.
- loop [bool] [default=false]: If set to true, the path is repeated once completed. Otherwise, success is reported on completion.
- tolerance [double] [default=0.1]: Tolerance for arrival. A waypoint is reached when the distance between state and current waypoint computed by the CompositeMetric component is below this limit.
isaac.controller.AckermannControl¶
Description
Receives a plan for an Ackermann-model robot and outputs a control with linear acceleration and tire angle by interpolating along the trajectory. If use_pid_controller is set to true, a PID controller is used to better track the target command.
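A sketch of the look-ahead interpolation controlled by command_delay (the column layout of the trajectory is an assumption):

```python
import numpy as np

def lookahead_command(times, trajectory, now, command_delay=0.01):
    """Interpolates the received plan at time = now + command_delay.

    times: increasing sample timestamps; trajectory: one row per sample,
    e.g. columns (linear_acceleration, tire_angle) as an illustrative layout.
    """
    t = now + command_delay
    return np.array([np.interp(t, times, trajectory[:, i])
                     for i in range(trajectory.shape[1])])
```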
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- plan [CompositeProto]: Receives a plan that contains a trajectory of the vehicle, including positions as well as the derivatives of the vehicle state and the angle of the tires.
- odometry [Odometry2Proto]: Receives the estimated current speed, used to stop the vehicle when no plan is received
Outgoing messages
- command [CompositeProto]: Publishes the control to be applied in order to follow the trajectory
Parameters
- command_delay [double] [default=0.01]: A look-ahead time (in seconds) to get the current command from the received plan. The expected position or speed at time = current_time + command_delay from the received plan is published as the command
- force_stop [bool] [default=false]: Parameter to force the robot to stop.
- stop_time [double] [default=0.05]: Time used to stop the vehicle if no plan is available.
- maximum_acceleration [double] [default=2.0]: Maximum acceleration used when stopping the vehicle
- use_pid_controller [bool] [default=false]: Whether or not we should use a pid controller to track the trajectory.
- speed_proportional_gain [double] [default=2.0]: Gain of the proportional error of the speed pid controller.
- speed_differential_gain [double] [default=0.5]: Gain of the differential error of the speed pid controller.
- speed_integral_gain [double] [default=0.25]: Gain of the integral error of the speed pid controller.
- maximum_speed_integral [double] [default=1.0]: Maximum value allowed for the integral error of the speed pid controller.
- lateral_proportional_gain [double] [default=1.0]: Gain of the proportional error of the lateral pid controller.
- lateral_differential_gain [double] [default=0.15]: Gain of the differential error of the lateral pid controller.
- lateral_integral_gain [double] [default=0.25]: Gain of the integral error of the lateral pid controller.
- maximum_lateral_integral [double] [default=1.0]: Maximum value allowed for the integral error of the lateral pid controller.
isaac.controller.DifferentialBaseControl¶
Description
Controller node for a differential base. Takes a trajectory plan and outputs a segway command.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- plan [DifferentialTrajectoryPlanProto]: Input: the plan to follow (contains a list of states at given timestamps)
Outgoing messages
- cmd [StateProto]: Output a navigation::DifferentialBaseControl state message.
Parameters
- cmd_delay [double] [default=0.2]: Expected delay between the command sent and its execution (in seconds)
- use_pid_controller [bool] [default=true]: Whether or not to use the pid controller
- manual_mode_channel [string] [default=”“]: Channel publishing whether or not the robot is in manual mode
- pid_heading [Vector7d] [default=Vector7d((double[]){1.0, 0.1, 0.0, 0.25, -0.25, 1.0, -1.0})]: Parameters of pid controller that controls the heading error
- pid_pos_y [Vector7d] [default=Vector7d((double[]){1.0, 0.1, 0.0, 0.25, -0.25, 2.0, -2.0})]: Parameters of pid controller that controls the lateral error
- pid_pos_x [Vector7d] [default=Vector7d((double[]){1.0, 0.1, 0.0, 0.25, -0.25, 2.0, -2.0})]: Parameters of pid controller that controls the forward error
- controller_epsilon_gain [double] [default=1.0]: Gains used to compute the forward gain
- controller_b_gain [double] [default=1.0]: Gains used to compute the heading gain
- robot_frame [string] [default=”robot”]: Name of robot’s frame. Output command message will be published in this frame.
- static_frame [string] [default=”odom”]: Name of a static frame. Used to convert input plan message to robot frame.
- use_tick_time [bool] [default=true]: This flag controls whether this codelet uses the tick time or the acquisition time to determine which command to output. Note: acquisition time should be used when the DifferentialTrajectoryPlanProto comes from a non-synchronized source. cmd_delay should be used to estimate the full delay from when the odometry was computed to when the command is executed on the system.
isaac.controller.HolonomicBaseControl¶
Description
Controller node for a holonomic base. Takes a trajectory plan and outputs a command.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- plan [DifferentialTrajectoryPlanProto]: Plan (position/time) the controller is trying to follow. TODO: Should not take a DifferentialTrajectoryPlanProto
Outgoing messages
- cmd [StateProto]: Output a navigation::DifferentialBaseControl state message.
Parameters
- cmd_delay [double] [default=0.2]: Expected delay between the command sent and its execution (in seconds)
- use_pid_controller [bool] [default=true]: Whether or not to use the pid controller
- manual_mode_channel [string] [default=”“]: Channel publishing whether or not the robot is in manual mode
- pid_heading [Vector7d] [default=Vector7d((double[]){1.0, 0.1, 0.0, 0.25, -0.25, 1.0, -1.0})]: Parameters of pid controller that controls the heading error
- pid_pos_y [Vector7d] [default=Vector7d((double[]){1.0, 0.1, 0.0, 0.25, -0.25, 2.0, -2.0})]: Parameters of pid controller that controls the lateral error
- pid_pos_x [Vector7d] [default=Vector7d((double[]){0.2, 0.05, 0.0, 0.1, -0.1, 2.0, -2.0})]: Parameters of pid controller that controls the forward error
- use_tick_time [bool] [default=true]: This flag controls whether this codelet uses the tick time or the acquisition time to determine which command to output. Note: acquisition time should be used when the DifferentialTrajectoryPlanProto comes from a non-synchronized source. cmd_delay should be used to estimate the full delay from when the odometry was computed to when the command is executed on the system.
isaac.controller.MultiJointController¶
Description
Interpolates received plan for joints in a kinematic tree object to obtain command at a given look-ahead time and publishes the command. Supports sending either joint position or speed command.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- plan [CompositeProto]: Receives a plan that contains a trajectory of joint positions and/or speeds. If control_mode is position, the trajectory must contain joint positions; if control_mode is speed, it must contain joint speeds
- state [CompositeProto]: Receives the current state of the joints. In position control, this is sent back to the robot in case of an emergency stop, so that the robot doesn’t move
Outgoing messages
- command [CompositeProto]: Publishes the current joint position or speed command
Parameters
- kinematic_tree [string] [default=]: Node name containing the map:KinematicTree component. This is used to get the names of the active joints to parse/serialize the composite proto.
- control_mode [ControlMode] [default=ControlMode::kPosition]: Use “position” or “speed” to specify whether to send joint position or speed commands
- command_delay [double] [default=0.05]: A look-ahead time (in seconds) to get the current command from the received plan. The expected position or speed at time = current_time + command_delay from the received plan is published as the command
isaac.deepstream.Pipeline¶
Description
The Deepstream pipeline codelet leverages the NVIDIA DeepStream SDK as a media tool. DeepStream and the open-source GStreamer library provide a convenient and versatile method for acquiring, decoding, processing, and publishing multimedia in many different formats. The communication channels of the codelet vary in number, type, and name; they are determined by the pipeline configuration. The codelet only ticks on received messages, which are passed on to the media pipeline.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- pipeline [string] [default=”videotestsrc ! video/x-raw”]: The Deepstream/GStreamer media pipeline. Note the usage of an appsink element for entry points into Isaac and, equivalently, appsrc for exits. The name of the element becomes the channel name, for example: appsrc name=<RX CHANNEL NAME>. For pipeline syntax, please read the command manual for gst-launch-1.0 or see https://gstreamer.freedesktop.org/documentation/tools/gst-launch.html. For supported capabilities, formats, memory models, and equivalent Isaac messages, refer to the component’s detailed documentation. A configuration sketch follows.
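A hypothetical configuration fragment, shown as a Python dict mirroring the JSON app config, in which the appsink named “color” becomes the Isaac channel of the same name (node and component names are assumptions):

```python
# Only the 'pipeline' key is documented above; the surrounding structure is
# a hypothetical app-config layout.
pipeline_config = {
    "deepstream": {
        "Pipeline": {
            "pipeline": "videotestsrc ! video/x-raw,format=RGB ! appsink name=color"
        }
    }
}
```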
isaac.detect_net.DetectNetDecoder¶
Description
The DetectNetDecoder converts tensors containing object detection values from the output of a DetectNetv2 network to a Detections2Proto type. DetectNetv2 is a ResNet-based model for multi-class object detection. For more information about DetectNetv2, see https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#training_gridbox. This codelet extracts all bounding boxes from the tensors, thresholds them on their confidence values, and filters out overlapping bounding boxes via non-maximum suppression.
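The final suppression stage is standard greedy IoU-based non-maximum suppression; a minimal sketch (not the codelet's implementation), keyed to the non_maximum_suppression_threshold parameter:

```python
import numpy as np

def non_maximum_suppression(boxes, scores, iou_threshold=0.6):
    """Greedy NMS over boxes given as (N, 4) [x1, y1, x2, y2] rows."""
    order = np.argsort(scores)[::-1]  # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # Intersection of the top box with all remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_threshold]  # drop overlapping detections
    return keep
```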
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- bounding_boxes_tensor [TensorProto]: Tensor from DetectNetv2 inference. Contains bounding boxes per class per grid box and is of size (4*N, R, C), where N = number of classes, R = grid rows, C = grid cols.
- confidence_tensor [TensorProto]: Tensor from DetectNetv2 inference. Contains confidence values per object per grid box and is of size (N, R, C), where N = number of classes, R = grid rows, C = grid cols.
Outgoing messages
- detections [Detections2Proto]: Output detections with bounding box, label, and confidence
Parameters
- non_maximum_suppression_threshold [double] [default=0.6]: Non-maximum suppression threshold. The greater this value is, the stricter the algorithm is when determining if two bounding boxes are detecting the same object.
- confidence_threshold [double] [default=0.6]: Confidence threshold of the detection. Decreasing this value allows less confident detections to be considered.
- min_bbox_area [int] [default=100]: Bounding box area threshold of the detection. Decreasing this value allows for smaller bounding box detections
- labels [std::vector<std::string>] [default=]: Names of the classes trained by the network. The order and length of this list must correspond to the order and length of the labels given during training.
- output_scale [Vector2d] [default=]: Output scale in [rows, cols] for the decoded bounding boxes output. For example, this could be the image resolution before downscaling to fit the network input tensor resolution.
- bounding_box_scale [double] [default=35.0]: Bounding box normalization for both X and Y dimensions. This value is set in the DetectNetv2 training specification.
- bounding_box_offset [double] [default=0.5]: Bounding box offset for both X and Y dimensions. This value is set in the DetectNetv2 training specification.
isaac.dummy.DummyPose2dConsumer¶
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- pose [Pose2dProto]:
- Outgoing messages
- (none)
Parameters
- threshold [int] [default=5]:
isaac.dummy.DummyPose2dProducer¶
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- pose [Pose2dProto]:
- Parameters
- (none)
isaac.dummy.DummyStatus¶
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- status [int] [default=0]:
isaac.dynamixel.DynamixelDriver¶
Description
A motor driver for a daisy chain of Dynamixel motors. This codelet receives desired motor speed commands via a message and publishes the current motor speeds as a state proto. Multiple checks and safeguards are used to protect the driver from misuse. The codelet also currently features a small debug mode in which individual motors can be tested with constant speed.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- command [StateProto]: The desired angular speeds for each motor
Outgoing messages
- state [StateProto]: The measured angular speeds for each motor
Parameters
- port [string] [default=”/dev/ttyUSB0”]: USB port where the Dynamixel controller is located. The port varies depending on the controller device, e.g., “/dev/ttyACM0” or “/dev/ttyUSB0”
- baudrate [Baudrate] [default=Baudrate::k1M]: Baud rate of the Dynamixel bus. This is the rate of information transfer.
- servo_model [Model] [default=Model::MX12W]: Model of servo (AX12A, XM430, MX12W, XC430)
- control_mode [DynamixelMode] [default=DynamixelMode::kVelocity]: Whether the Dynamixel motors are controlled in velocity (speed) mode or in position mode
- servo_ids [std::vector<int>] [default=]: Unique identifier for Dynamixel servos. Each motor needs to be assigned a unique ID using the software provided by Dynamixel. This is a mandatory parameter.
- torque_limit [double] [default=1.0]: Servo maximum torque limit. Caps the amount of torque the servo will apply. 0.0 is no torque, 1.0 is max available torque
- max_speed [double] [default=6.0]: Maximum (absolute) angular speed for wheels
- command_timeout [double] [default=0.3]: Commands received that are older than command_timeout seconds will be ignored. Kaya will stop if no message is received for command_timeout seconds.
- debug_mode [bool] [default=false]: Enables debug mode in which all motors are driving with constant speed independent from incoming messages.
- debug_speed [double] [default=1.0]: If debug mode is enabled, all motors will rotate with this speed.
isaac.egm_fusion.EvidenceGridMapViewer¶
Description
Visualizes a three-channel evidence grid map as an RGB image where the first channel (visualized as white) denotes the belief mass for a cell being free and the second channel (colored black) represents the belief of a cell being occupied. The last channel contains the mass representing the uncertainty in the cell (represented by green by default). The structure of the evidence grid map is defined by the lattice associated with it.
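A sketch of such a visualization, assuming the masses are blended linearly (an assumption; the codelet's exact blending is not documented here):

```python
import numpy as np

def egm_to_rgb(egm, uncertain_color=(0, 133, 100)):
    """egm: (H, W, 3) float masses (free, occupied, uncertain) in [0, 1]."""
    palette = np.array([[255.0, 255.0, 255.0],   # free -> white
                        [0.0, 0.0, 0.0],         # occupied -> black
                        list(uncertain_color)])  # uncertain -> default green
    rgb = np.tensordot(egm, palette, axes=([2], [0]))  # mass-weighted blend
    return np.clip(rgb, 0, 255).astype(np.uint8)
```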
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- egm_lattice [LatticeProto]: Lattice information for the evidence grid map to be visualized
- egm [ImageProto]: Evidence grid map to be visualized
- Outgoing messages
- (none)
Parameters
- uncertainity_visualization_color [Pixel3ub] [default=Pixel3ub(0, 133, 100)]: RGB color for the uncertain region in an evidence grid. By default, uncertain regions are visualized with dark green color
isaac.egm_fusion.EvidenceMapFusion¶
Description
Creates and maintains a dynamic grid-based state space of the environment as an evidence map.
The output dynamic evidence map is always relative to the base frame, with the center of the base frame at a fixed location in the map. The previous state of the evidence map is continuously fused into the current one using the odometry transformation. Good odometry is critical to maintaining a sharp, high-quality evidence map.
Internally a much larger fused map is maintained and has the orientation of the world frame. This is done to avoid unnecessary diffusion effects from coordinate transformations. New observation measurements are fused into the larger map and then the local map is extracted from it and transformed back to its frame relative to the base.
The number of evidence grid map sources can be set by the parameter ‘num_observation_channels’. For a given number, the channel names are set as sensor_1, sensor_2, sensor_3, etc. The prefix for the channels can be configured through the ‘observation_channel_prefix’ parameter. For every observation channel, a corresponding lattice channel is created with the same channel index. For example, the channel ‘lattice_1’ corresponds to the lattice for the evidence grid map created by sensor_1.
There are a couple of relevant coordinate frames:
- base_frame: The coordinate frame relative to which the fused map frame and the local map frame are computed (for example, the center of the robot).
- odom: The pose of the base frame is continuously estimated with respect to this frame.
- localmap: The “grid” coordinate frame of the local map, starting in the top left corner of the local map. This is with respect to the base frame.
- fused_map_grid: The “grid” coordinate frame of the fused map.
- fused_map_center: The center of the fused map. The fused map is centered at the current position of the base frame but has a constant orientation. The orientation will however drift over time as odometry is not perfect.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- local_map_lattice [LatticeProto]: The lattice grid information associated to the output local evidence map. It contains the information about the cell size as well as the reference frame of the local egm.
- fused_map_lattice [LatticeProto]: The lattice grid information associated to the larger fused evidence map. It contains the information about the cell size as well as the reference frame of the fused evidence map.
Outgoing messages
- evidential_local_map [ImageProto]: The latest evidence map representing the environment.
Parameters
- fusion_operator [FusionOperator] [default=FusionOperator::kPCR6]: Fusion rule to fuse evidence masses
- num_observation_channels [size_t] [default=1]: Number of channels connected to this codelet in the application graph
- odom_frame [string] [default=”odom”]: The name of the coordinate frame used to continuously update the fused map.
- base_frame [string] [default=”robot”]: The name of the coordinate frame from which local map and fused map frames are computed
- observation_channel_prefix [string] [default=”sensor_”]: Prefix for the channels that accept evidence grid maps from the sensors. Indexing for the observation channels begins at 1
- lattice_channel_prefix [string] [default=”lattice_”]: Prefix for the channels that accept lattices for the grid maps from the sensors. Indexing for the lattice channels begins at 1
- use_gpu [bool] [default=true]: Whether CUDA kernels are used for fusion and extraction of the local map. Currently only the PCR6 fusion operation is accelerated with CUDA
isaac.egm_fusion.EvidenceMapInpaint¶
Description
Paints a set of rectangles in an evidence grid map with the paint_mass parameter.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- egm_lattice [LatticeProto]: An evidence grid map lattice information
- original_egm [ImageProto]: An evidence grid map which needs to be painted
Outgoing messages
- painted_egm [ImageProto]: A painted evidence grid map
Parameters
- rectangles_frame [string] [default=]: The coordinate frame of the rectangular region to paint
- paint_mass [Vector3f] [default=Vector3f(1.0f, 0.0f, 0.0f)]: The mass values that will be used to paint every cell within the rectangular region
- rectangles [std::vector<geometry::RectangleD>] [default=]: Set of rectangular regions to paint in the evidence grid, each one with the format [[x_min,x_max], [y_min,y_max]] in meters
isaac.egm_fusion.EvidenceToBinaryMap¶
Description
Converts a two-proposition evidence map (where each cell contains the masses of the free, occupied, and uncertain propositions) into a binary map based on thresholds. A cell is declared free if the difference between the free and occupied masses exceeds the free threshold. Similarly, a cell is declared occupied if the difference between the occupied and free masses exceeds the occupied threshold. The remaining cells are treated as free if is_optimistic is true, and as occupied otherwise.
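A minimal sketch of this thresholding (the 0 = free / 1 = occupied output encoding is an assumption):

```python
import numpy as np

def evidence_to_binary(egm, free_threshold=0.1, occupied_threshold=0.1,
                       is_optimistic=False):
    """egm: (H, W, 3) array of (free, occupied, uncertain) masses."""
    free_mass, occupied_mass = egm[..., 0], egm[..., 1]
    fill = 0 if is_optimistic else 1  # how undecided cells are treated
    binary = np.full(egm.shape[:2], fill, dtype=np.uint8)
    binary[(free_mass - occupied_mass) > free_threshold] = 0      # free
    binary[(occupied_mass - free_mass) > occupied_threshold] = 1  # occupied
    return binary
```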
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- evidence_grid [ImageProto]: Incoming evidence map which will be converted to binary map
Outgoing messages
- binary_map [ImageProto]: Computed binary map (Image1ub)
Parameters
- free_threshold [float] [default=0.1]: Minimum difference between the free and occupied mass of a cell for it to be labelled free
- occupied_threshold [float] [default=0.1]: Minimum difference between the occupied and free mass of a cell for it to be labelled occupied
- is_optimistic [bool] [default=false]: If enabled, uncertain cells will be treated as free, otherwise they are considered to be blocked.
isaac.egm_fusion.RangeScanToEvidenceMap¶
Description
Computes an evidence map from a flatscan. An evidence map is represented as an Image3f where the first channel corresponds to the belief mass of a cell being free, the second channel to the belief mass of a cell being occupied, and the third channel to the uncertainty mass.
The map is computed relative to the sensor position, the axes are in the same direction as the sensor frame, and the translation is controlled by the dimensions and map_offset_relative parameters.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: The observation_map is created based on flat range scans.
- evidence_map_lattice [LatticeProto]: The lattice information of the Evidence Map
Outgoing messages
- evidence_map [ImageProto]: Image3f containing the free, occupied and uncertain masses
Parameters
- wall_thickness [double] [default=0.20]: When integrating a flatscan, an area of the given thickness behind a hit is marked as solid. This value should be at least on the order of the chosen cell size.
- sensor_frame [string] [default=”lidar”]: The name of the reference frame in which range scans arriving on the flatscan channel are defined.
isaac.evaluators.AprilTagsEvaluator¶
Description
AprilTagsEvaluator is a codelet used to assess the accuracy and precision of the fiducial detections made by the AprilTagsDetection codelet. It is meant to be used in conjunction with the driver codelet of a camera sensor, which should publish ImageProto structs, to sanity-test whether that camera provides images compatible with the computer-vision-based algorithms in Isaac SDK. As such, this codelet is only meant to be run as part of the sensor certification framework, which launches it alongside the driver of the sensor under test and the AprilTagsDetection codelet. The verification process first ensures that fiducial detections can be made from the ImageProto structs published by the camera driver, after which a sequence of fiducial detections is checked to be within a pre-specified ROI, currently a centered area 3/4 of the height and width of the image resolution.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- color [ImageProto]: The camera image which should be of type Image3ub and in color. This is used only to save a sample PNG image to the sensor certification test logs when running this test.
- fiducials [FiducialListProto]: The list of AprilTag fiducials the test verifies are valid, meaning each fiducial has at least 4 points (corners) and, based on those points, lies within the ROI.
- Outgoing messages
- (none)
Parameters
- roi_ratio [double] [default=0.75]: The ratio between the dimensions of the ROI to the resolution of the image provided by the camera under test.
- test_timeout [int] [default=]: The amount of time (seconds) the test is run for.
- num_frames [int] [default=]: The number of FiducialListProto messages to be captured.
isaac.evaluators.ColorEvaluator¶
Description
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- color_listener [ImageProto]: Color images to validate
Outgoing messages
- cv_image [ImageProto]: CV Masked image
Parameters
- frame_rate [double] [default=]: The frame rate of the color channel
- image_height [int] [default=]: Image dimensions
- image_width [int] [default=]:
- target_color [int] [default=0]: Expected lego color of target area: 0-red, 1-green, 2-blue
- target_width [int] [default=10]: Width of target area in pixels
- target_height [int] [default=10]: Height of target area in pixels
- target_cmd [int] [default=0]: Direction to move target: 0-none, 1-up, 2-down, 3-left, 4-right, 5-done
isaac.evaluators.ColorSanityEvaluator¶
Description
Receives color camera messages and verifies the dimensions, colorspace, and focal length match the expected values
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- color_listener [ImageProto]: Color images to validate
- color_intrinsics_listener [CameraIntrinsicsProto]:
- Outgoing messages
- (none)
Parameters
- rows [int] [default=]: Expected number of rows
- cols [int] [default=]: Expected number of columns
- channels [int] [default=]: Expected number of channels
- focal_x [float] [default=]: Expected focal length
- focal_y [float] [default=]:
- num_frames [int] [default=]: Number of frames to test
- colorspace [string] [default=]: Colorspace
isaac.evaluators.DepthEvaluator¶
Description
Captures several depth frames and examines the mean and standard deviation of the depth within a target region. Verifies the mean and standard deviation are within the given tolerances
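A sketch of the per-region check, using the target_depth, mean_tolerance, and cov_tolerance parameters listed below (illustrative, not the codelet's implementation):

```python
import numpy as np

def depth_within_tolerance(depth_patch, target_depth=0.5,
                           mean_tolerance=1.0, cov_tolerance=0.05):
    """mean_tolerance is a percent error on the mean depth; cov_tolerance
    bounds the coefficient of variation (standard deviation / mean)."""
    mean = float(np.mean(depth_patch))
    mean_error_percent = 100.0 * abs(mean - target_depth) / target_depth
    cov = float(np.std(depth_patch)) / mean
    return mean_error_percent <= mean_tolerance and cov <= cov_tolerance
```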
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- depth_listener [ImageProto]: Depth messages to evaluate
- color_listener [ImageProto]: After setup is complete, the first color message received on this channel is saved as a png to record the scene setup
- Outgoing messages
- (none)
Parameters
- frame_rate [double] [default=]: The frame rate of the depth channel, used to determine how many frames to evaluate
- image_height [int] [default=]: Image dimensions
- image_width [int] [default=]:
- target_depth [double] [default=0.5]: Target depth value in meters
- target_width [int] [default=10]: Width of target area in pixels
- target_height [int] [default=10]: Height of target area in pixels
- mean_tolerance [double] [default=1.0]: Tolerance for average depth (% error)
- cov_tolerance [double] [default=0.05]: Tolerance for coefficient of variation (standard deviation / mean)
- target_cmd [int] [default=0]: Direction to move target: 0-none, 1-up, 2-down, 3-left, 4-right, 5-done
isaac.evaluators.DepthSanityEvaluator¶
Description
Receives depth camera messages and verifies that the dimensions and focal length match the expected values
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- depth_listener [ImageProto]: Depth images to validate
- depth_intrinsics_listener [CameraIntrinsicsProto]:
- Outgoing messages
- (none)
Parameters
- rows [int] [default=]: Expected number of rows
- cols [int] [default=]: Expected number of columns
- focal_x [float] [default=]: Expected focal length
- focal_y [float] [default=]:
- num_frames [int] [default=]: Number of frames to test
isaac.evaluators.FrameRateEvaluator¶
Description
Captures several depth or color images and verifies that the framerate is within the given tolerances
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- color_listener [ColorCameraProto]: Color channel to listen to for framerate
- depth_listener [DepthCameraProto]: Depth channel to listen to for framerate
- Outgoing messages
- (none)
Parameters
- target_fps [double] [default=30.0]: Target framerate
- fps_tolerance [double] [default=1.0]: Tolerance for framerate (% error)
- test_depth [bool] [default=false]: Whether to test the depth framerate; otherwise the color framerate is tested
- test_duration [double] [default=5.0]: Length of time (seconds) to run framerate test
isaac.fiducials.AprilTags3Detection¶
Description
AprilTags3Detection takes an image as input and detects and decodes any AprilTags found in the image. It returns an array of Tag IDs as output. The AprilTags3 codelet uses the AprilTag 3 reference implementation from AprilRobotics. This implementation is known to be slower than the CUDA-optimized NvAprilTag implementation. It is provided as a reference implementation only, without any performance optimizations.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- image [ImageProto]: RGB input image. Image should be undistorted prior to being passed in here.
- intrinsics [CameraIntrinsicsProto]: Intrinsics including pinhole and distortion parameters
Outgoing messages
- april_tags [FiducialListProto]: Output, List of AprilTag fiducials
Parameters
- tag_dimensions [double] [default=0.18]: Tag dimensions, translation of tags will be calculated in same unit of measure
- tag_family [string] [default=”tag36h11”]: Tag family, currently ONLY tag36h11 is supported
isaac.fiducials.AprilTagsDetection¶
Description
AprilTagsDetection takes an image as input and detects and decodes any AprilTags found in the image. It returns an array of Tag IDs as output.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- image [ImageProto]: RGB input image. Image should be undistorted prior to being passed in here.
- intrinsics [CameraIntrinsicsProto]: Intrinsics including pinhole and distortion parameters
Outgoing messages
- april_tags [FiducialListProto]: Output, List of AprilTag fiducials
Parameters
- max_tags [int] [default=50]: Maximum number of AprilTags that can be detected
- tag_dimensions [double] [default=0.18]: Tag dimensions, translation of tags will be calculated in same unit of measure
- tag_family [string] [default=”tag36h11”]: Tag family, currently ONLY tag36h11 is supported
isaac.fiducials.FiducialAsGoal¶
Description
Looks for a fiducial with a specific ID and uses it as a goal for the navigation stack. The goal can be computed relative to the fiducial based on different methods.
- “center”: The center of the fiducial is projected into the Z=0 plane and published as the goal point for the navigation stack.
- “pointing”: A ray is shot out of the center of the fiducial into the direction of the normal and intersected with the Z=0 ground plane. This happens up to a maximum distance of max_goal_tag_distance.
- “offset”: The fixed offset fiducial_T_goal is used to compute the goal based on the detected fiducial.
A goal or plan is published every time a fiducial detection is received. If the fiducial is not seen for longer than give_up_duration, a stop command is sent.
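For the “pointing” mode, the ray/ground-plane intersection can be sketched as follows (assumes a unit-length tag normal expressed in a frame whose Z=0 plane is the ground; the clamping behavior is an assumption):

```python
import numpy as np

def pointing_mode_goal(tag_center, tag_normal, max_goal_tag_distance=1.0):
    """Intersects the ray from the tag center along its normal with Z=0."""
    tag_center, tag_normal = np.asarray(tag_center), np.asarray(tag_normal)
    if abs(tag_normal[2]) < 1e-9:
        return None  # ray is parallel to the ground plane
    t = -tag_center[2] / tag_normal[2]  # distance along the ray to Z=0
    if t < 0.0:
        return None  # the normal points away from the ground
    t = min(t, max_goal_tag_distance)  # cap at the configured maximum
    return tag_center + t * tag_normal
```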
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- fiducials [FiducialListProto]: The input channel where fiducial detections are published
Outgoing messages
- goal [Goal2Proto]: The target fiducial as a goal
- plan [Plan2Proto]: The target fiducial as a simple plan with one waypoint
Parameters
- target_fiducial_id [string] [default=”tag36h11_9”]: The ID of the target fiducial
- give_up_duration [double] [default=1.0]: If the robot does not see the fiducial for this time period the robot is stopped
- mode [Mode] [default=Mode::kCenter]: Specifies how the robot will use the fiducial to compute its goal location.
- max_goal_tag_distance [double] [default=1.0]: The maximum distance the goal will be away from the tag
- robot_frame [string] [default=]: The name of the robot coordinate frame
- camera_frame [string] [default=]: The name of the camera coordinate frame
isaac.flatscan_localization.GradientDescentLocalization¶
Description
A flatscan localization method using a gradient descent algorithm.
This codelet uses a flatscan to localize the robot in a known map. As this is a local optimization technique an initial guess is necessary. The computed pose of the scanner and thus the robot are written to the pose tree.
This method is quite stable compared to the noisier particle-filter based approach. However, it is a uni-modal technique which cannot deal well with ambiguity.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: Range scan used to localize the robot
- Outgoing messages
- (none)
Parameters
- map [string] [default=”map”]: Reference frame in which the algorithm tracks the target.
- flatscan_frame [string] [default=”lidar”]: Flat range scans arriving on the flatscan channel are relative to this coordinate frame.
- robot_init_frame [string] [default=”robot_init_global_localizer”]: In the start function the algorithm is initialized with the pose of this coordinate frame.
- robot_frame [string] [default=”robot”]: The frame which is tracked by this algorithm.
isaac.flatscan_localization.GridSearchLocalizer¶
Description
An exhaustive grid search localizer.
Based on a flat range scan every possible pose in a map is checked for the likelihood that the scan was taken at that pose. The pose with the best match is written to the pose tree as a result.
This node uses a simplified and customized range scan model to increase the performance of the algorithm. The algorithm currently only works for a 360 degree range scan with constant angular resolution.
This component uses a GPU-accelerated algorithm. Depending on the map size and the GPU the runtime of the algorithm might range from less than a second to multiple seconds.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- exclude_restricted_areas [bool] [default=true]: If false, robot may localize inside a restricted area defined in the map configuration
- robot_radius [double] [default=0.25]: The radius of the robot. This parameter is used to exclude poses which are too close to an obstacle.
- max_beam_error [double] [default=0.50]: The maximum beam error used when comparing range scans.
- num_beams_gpu [int] [default=256]: The GPU accelerated scan-and-match function can only handle a certain number of beams per range scan. The allowed values are {32, 64, 128, 256, 512}. If the number of beams in the range scan does not match this number a subset of beams will be taken.
- batch_size [int] [default=512]: This is the number of scans to collect into a batch for the GPU kernel. Choose a value which matches your GPU well.
- sample_distance [float] [default=0.1]: Distance between sample points in meters. The smaller this number, the more sample poses will be considered. This leads to a higher accuracy and lower performance.
- map [string] [default=”map”]: Name of map node to use for localization
- flatscan_frames [std::vector<std::string>] [default={“lidar”}]: The names of the reference frames for the flatscan sources in which range scans arriving on the flatscan channels are defined. The order of the frame names corresponds to the numbering of the expected incoming flatscan channels flatscan_<n>. While n counts from 1, the channel flatscan_1 is just referred to as flatscan.
- flatscan_channel_names [std::vector<std::string>] [default={“flatscan”}]: The names of the input channels used to receive flatscan messages. Each flatscan channel is associated to the frame name in flatscan_frames at the same position. Both lists need to have the same amount of entries. Channel names may not be duplicated in this list.
- output_frame [string] [default=”robot_init_global_localizer”]: The estimated pose of the robot will be written to PoseTree as world_T_output_frame
- patch_size [Vector2i] [default=Vector2i(512, 512)]: The size in gridmap rows/columns each patch used for tracing should contain.
- robot_frame [string] [default=”robot”]: Name of the frame of the robot
isaac.flatscan_localization.MonitorEvaluatorFrequency¶
Description
Calculates the frequency with which flatscan messages are received. Sends a normalized value to any connected Monitor instance based on its threshold parameter and an expected frequency.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: Incoming flatscan messages to calculate a message frequency from
Outgoing messages
- grade [CompositeProto]: Outgoing grade to send to a Monitor instance
Parameters
- threshold [double] [default=5.0]: Measure defining a deviation of 1.0 between the actual and the expected frequency, in Hz. For each multiple of this threshold by which the actual frequency differs from expected_frequency, the resulting evaluator grade increases (frequency smaller than expected) or decreases (frequency higher than expected) by 1.0; fractions apply. See the sketch below.
- expected_frequency [double] [default=10.0]: The expected frequency with which flatscan messages are received, in Hz
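A sketch of the grade computation described by the threshold parameter (the exact formula used by the codelet is an assumption):

```python
def frequency_grade(measured_hz, expected_frequency=10.0, threshold=5.0):
    # Each `threshold` Hz of deviation contributes 1.0 to the grade; the grade
    # is positive when messages arrive slower than expected, negative when faster.
    return (expected_frequency - measured_hz) / threshold
```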
isaac.flatscan_localization.MonitorEvaluatorMapscan¶
Description
Evaluates measured flatscans against predicted flatscans generated from the occupancy gridmap used for localization and the current estimated robot pose. Sends a normalized quality grade to any connected Monitor instance based on the result.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: Incoming flatscan messages used to monitor the current robot pose
Outgoing messages
- grade [CompositeProto]: Outgoing grade to send to a Monitor instance
Parameters
- map [string] [default=”map”]: Name of the map node which contains the reference map
- range_scan_model [string] [default=”shared_robot_model”]: Name of the node which contains the RangeScanModel component used to compare flatscans
- flatscan_frame [string] [default=”lidar”]: Name of the coordinate frame of the sensor which produces the flatscan messages
- robot_frame [string] [default=”robot”]: Name of the coordinate frame of the robot base
- beam_distance_threshold [double] [default=0.2]: Beams where measured distance and expected distance are within this tolerance are considered “good” beams.
- good_beams_threshold [double] [default=0.75]: Measure defining what percentage of good beams corresponds to a grade deviation of 1.0
isaac.flatscan_localization.MonitorEvaluatorSvio¶
Description
Compares the relative movement of two coordinate transformations in different pose trees. This codelet targets coordinate frames that have different means of acquiring the estimate, specifically flatscan based pose estimation and Stereo Visual Inertial Odometry. It sends a grade value representing their agreement to connected Monitor instances.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- grade [CompositeProto]: Outgoing grade to send to a Monitor instance
Parameters
- ref1_frame [string] [default=]: Name of the reference frame of the first input pose
- pose1_frame [string] [default=]: Name of the target frame of the first input pose
- ref2_frame [string] [default=]: Name of the reference frame of the second input pose
- pose2_frame [string] [default=]: Name of the target frame of the second input pose
- delta_time [double] [default=]: Optional delta time that defines the time interval between the poses to subtract; if this parameter is not set, the codelet’s start time is used instead
- threshold [Vector2d] [default=]: Maximum relative difference between the two poses tolerated, in position (meters) and angle (radians)
isaac.flatscan_localization.ParticleFilterLocalization¶
Description
Localizes the robot in a given map based on a flat range scan.
A Bayesian filter based on a particle filter is used to keep track of a multi-modal hypothesis distribution. On every tick the particle distribution is updated based on an ego-motion estimate read from the pose tree. Particles are then evaluated against the measured range scan using a range scan model to compute new particle scores. The particles with the highest scores are combined in a weighted average to compute the new best estimate of the robot pose. The robot pose is written into the pose tree as a result.
Range scans are compared using a range scan model. In order for this node to work properly, a component derived from RangeScanModel needs to be created and referenced in the range_scan_model parameter.
Particles are initialized in the start function using an initial estimate of the robot pose which is read from the pose tree. The GridSearchLocalizer component can for example be used to provide this initial estimate. Alternatively the initial pose could also be provided using a PoseInitializer component.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: Incoming range scan used to localize the robot
- flatscan_2 [FlatscanProto]: A second range scan which can be used to localize the robot.
Outgoing messages
- samples [Pose2Samples]: The current weighted samples the particle filter is tracking
Parameters
- num_particles [int] [default=75]: The number of particles used in the particle filter
- absolute_predict_sigma [Vector3d] [default=Vector3d(0.04, 0.04, DegToRad(5.0))]: Standard deviation of Gaussian noise “added” to the estimated pose during the predict step of the particle filter. This value is a rate per second and will be scaled by the time step. The equation used is of the form: current_position += Gaussian(0, sqrt(dt) * sigma). Note the use of sqrt(dt) for scaling the standard deviation, which is required when summing up normal distributions. The vector contains three parameters: noise along the forward direction (X axis), noise along the sidewards direction (Y axis), and noise for the rotation. (A sketch of this predict step follows the parameter list.)
- relative_predict_sigma [Vector3d] [default=Vector3d(0.10, 0.10, 0.10)]: Standard deviation of Gaussian noise which is applied relative to the current speed of the robot and scaled by the time step. The equation used is of the form: current_position += Gaussian(0, sqrt(dt) * current_speed * sigma). The vector contains three parameters as explained in absolute_predict_sigma.
- initial_sigma [Vector3d] [default=Vector3d(0.3, 0.3, DegToRad(20.0))]: Standard deviation of Gaussian noise which is applied to the initial pose estimate when the particle filter is (re-)seeded.
- output_best_percentile [double] [default=0.10]: The final pose estimate is computed using the average of the best particles. For example a value of 0.10 would mean that the top 10% of particles with highest scores are used to compute the final estimate.
- reseed_particles [bool] [default=false]: Set to true to request reseeding of the particles. This is reset to false once the particle filter has been reseeded.
- map [string] [default=”map”]: Node of the map which contains map data. The map is used to compute which range scan would be expected from a hypothetical robot pose.
- range_scan_model [string] [default=”shared_robot_model”]: Name of the node which contains a component of type RangeScanModel which is then used to compare range scans when evaluating particles against a new incoming message.
- flatscan_frame [string] [default=”lidar”]: Flat range scans arriving on the flatscan channel are relative to this coordinate frame.
- flatscan_2_frame [string] [default=”lidar_2”]: Similar to flatscan_frame but for the second flatscan channel flatscan_2.
- robot_init_frame [string] [default=”robot_init_global_localizer”]: In the start function particles are initialized around this coordinate frame.
- robot_frame [string] [default=”robot”]: The frame which is tracked by the particle filter.
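A sketch of the predict-step noise model described under absolute_predict_sigma and relative_predict_sigma (the particle layout is illustrative):

```python
import numpy as np

def predict_particles(particles, dt, speed,
                      absolute_sigma=(0.04, 0.04, np.deg2rad(5.0)),
                      relative_sigma=(0.10, 0.10, 0.10)):
    """particles: (N, 3) array of (x, y, heading) hypotheses."""
    scale = np.sqrt(dt)
    # Two independent noise terms, as in the parameter descriptions above.
    noise_abs = np.random.normal(0.0, scale * np.asarray(absolute_sigma),
                                 size=particles.shape)
    noise_rel = np.random.normal(0.0, scale * abs(speed) * np.asarray(relative_sigma),
                                 size=particles.shape)
    return particles + noise_abs + noise_rel
```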
isaac.flatscan_localization.ParticleSwarmLocalization¶
Description
An adaptive localization algorithm using a swarm of particles.
A particle swarm algorithm is used to localize the robot based on a single flat range scan. The pose with the best match is written to the pose tree as a result.
Consider using GridSearchLocalizer instead, as it might provide a better particle-to-precision ratio.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: The current sensor measurement based on which we try to localize in the map
- Outgoing messages
- (none)
Parameters
- num_particles [int] [default=1000]: The number of particles used by PSO
- pso_omega [double] [default=0.5]: Omega parameter of PSO
- pso_phi [Vector3d] [default=(Vector3d{0.05, 0.05, 0.1})]: Phi parameter of PSO (values are for dx, dy, da)
- pso_phi_p_to_g [double] [default=1.0]: PSO parameter to express ratio between phi_p and phi_g
- map [string] [default=”map”]: Map node to use for localization
- output_frame [string] [default=”robot_init_global_localizer”]: The estimated pose of the robot will be written to PoseTree as world_T_output_frame
isaac.flatsim.DifferentialBasePhysics¶
Description
Runs a very basic physics simulation which moves a differential base by following commands quite literally.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- command [CompositeProto]: Actuator commands for the wheels of a differential base
Outgoing messages
- bodies [CompositeProto]: Resulting physics state of the differential base body
Parameters
- robot_model [string] [default=”shared_robot_model”]: Name of the robot model node
- joint_name_left_wheel [string] [default=”left_wheel”]: Name of the joint for left wheel
- joint_name_right_wheel [string] [default=”right_wheel”]: Name of the joint for right wheel
- wheel_acceleration_noise [double] [default=0.03]: Each step a random normal-distributed noise with the given sigma will be added to the desired wheel acceleration. The sigma will be scaled based on the time step and wheel speed.
- wheel_acceleration_noise_decay [double] [default=0.995]: The wheel acceleration noise is additive simulating a random walk. To keep the noise bounded around zero it is multiplied with a decay factor at every timestep.
- slippage_magnitude_range [Vector2d] [default=Vector2d(0.00, 0.05)]: A random friction value is applied which effectively reduces the effect of wheel speed on wheel distance driven. A friction value of 0 means full transmission, while a friction value of 1 means full slippage. Slippage is computed randomly using a uniform distribution with the given minimum and maximum value.
- slippage_duration_range [Vector2d] [default=Vector2d(0.50, 1.25)]: The slippage value is maintained constant for a certain duration and then changed to a new value. The duration of the slippage is computed using a uniform distribution with given minimum and maximum value.
- robot_init_pose_name [string] [default=”robot_init_gt”]: Name of the pose in pose tree to use as the initial pose for robot
isaac.flatsim.DifferentialBaseSimulator¶
Description
Simulates a differential base by translating base commands into actuator commands, and by publishing base state computed based on rigid body state from the simulator
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- diff_base_command [StateProto]: Input command message with desired body speed (should be of type: DifferentialBaseControl)
- physics_bodies [CompositeProto]: Input state of the base rigid body as computed by physics
Outgoing messages
- physics_actuation [CompositeProto]: Output actuator message with desired accelerations for each wheel
- diff_base_state [StateProto]: Output state of differential base (DifferentialBaseDynamics)
Parameters
- max_wheel_acceleration [double] [default=10.0]: The maximum acceleration for a wheel
- power [double] [default=0.20]: How fast the base will accelerate towards the desired speed
- flip_left_wheel [bool] [default=false]: If this is enabled the direction of the left wheel will be flipped
- flip_right_wheel [bool] [default=false]: If this is enabled the direction of the right wheel will be flipped
- robot_model [string] [default=”shared_robot_model”]: Name of the robot model node
- joint_name_left_wheel [string] [default=”left_wheel”]: Name of the joint for left wheel
- joint_name_right_wheel [string] [default=”right_wheel”]: Name of the joint for right wheel
isaac.flatsim.FlatscanNoiser¶
Description
Adds noise to a flatscan
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: Receives a FlatScan proto: this is a list of beams (angle + distance)
Outgoing messages
- noisy_flatscan [FlatscanProto]: Outputs a noisy FlatScan proto: a list of beams (angle + distance) with noise applied (see the sketch after the parameter list)
Parameters
- min_range [double] [default=0.25]: The minimum range at which obstacles are detected
- max_range [double] [default=50.0]: The maximum range of the simulated LIDAR
- range_sigma_rel [double] [default=0.001]: Standard deviation of relative range error
- range_sigma_abs [double] [default=0.03]: Standard deviation of absolute range error
- beam_invalid_probability [double] [default=0.05]: Probability that a beam will be simulated as invalid
- beam_random_probability [double] [default=0.00001]: Probability that a beam will return a random range
- beam_short_probability [double] [default=0.03]: Probability that a beam will return a smaller range
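A sketch of the per-beam noise model defined by the parameters above (how the codelet encodes invalid and shortened beams is an assumption, and random-range beams are omitted for brevity):

```python
import numpy as np

def add_flatscan_noise(ranges, range_sigma_rel=0.001, range_sigma_abs=0.03,
                       beam_invalid_probability=0.05,
                       beam_short_probability=0.03, max_range=50.0):
    """ranges: float array of measured beam distances."""
    rng = np.random.default_rng()
    sigma = range_sigma_abs + range_sigma_rel * ranges  # absolute + relative
    noisy = ranges + rng.normal(0.0, sigma)
    short = rng.random(ranges.shape) < beam_short_probability
    noisy[short] *= rng.random(np.count_nonzero(short))  # return a smaller range
    invalid = rng.random(ranges.shape) < beam_invalid_probability
    noisy[invalid] = max_range  # hypothetical encoding of an invalid beam
    return noisy
```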
isaac.flatsim.SimRangeScan¶
Description
Simulates a 2D range scan
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- flatscan [FlatscanProto]: Output a FlatScan proto: this is a list of beams (angle + distance)
Parameters
- num_beams [int] [default=360]: The number of beams in the range scan
- min_range [double] [default=0.25]: The minimum range at which obstacles are detected
- max_range [double] [default=50.0]: The maximum range of the simulated LIDAR
- min_angle [double] [default=0.0]: The min angle of simulated beams
- max_angle [double] [default=TwoPi<double>]: The max angle of simulated beams
- map [string] [default=”map”]: Map node to use for tracing range scans
- lidar_frame [string] [default=”lidar_gt”]: Name of the frame of the simulated LiDAR sensor
isaac.flatsim.TraceOccupancyMap¶
Description
Given an occupancy map containing information about the surroundings of the agent, simulates a flatscan (or raycast). The rays are projected from the center of the sensor in the map to all the corners of the occupancy map.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- occupancy_map [ImageProto]: Incoming occupancy grid map that needs to be raycasted
- occupancy_map_lattice [LatticeProto]: Structure of the occupancy grid map
Outgoing messages
- flatscan [FlatscanProto]: Output a FlatScan proto with a list of beams (angle + distance)
Parameters
- num_beams [int] [default=900]: The number of beams in the range scan
- min_range [double] [default=0.00]: The minimum range at which obstacles are detected
- max_range [double] [default=1000.0]: The maximum range of the simulated LIDAR
- min_angle [double] [default=0.0]: The min angle of simulated beams
- max_angle [double] [default=TwoPi<double>]: The max angle of simulated beams
- lidar_frame [string] [default=”lidar”]: Name of the frame of the simulated LiDAR sensor
isaac.fuzzy.EfllFuzzyEngineExample¶
Description
A sample class demonstrating how to do fuzzy inference with EFLL
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- (none)
isaac.fuzzy.LfllFuzzyEngineExample¶
Description
A sample class demonstrating how to do fuzzy inference with LFLL
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- (none)
isaac.gtc_china.PanTiltGoto¶
Description
Publishes the target pan and tilt angles according to user-defined parameters. It also receives the current pan and tilt angles from the pan-tilt unit as feedback, and uses it to monitor if the target angles have been reached. Once the target angles are reached, it reports success.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- current_state [StateProto]: Input containing the current pan-tilt state
Outgoing messages
- target_state [StateProto]: Output state proto containing target pan and tilt angle information
Parameters
- target_pan_angle [double] [default=0.0]: Target pan angle for the camera
- target_tilt_angle [double] [default=0.0]: Target tilt angle for the camera
- tolerance [double] [default=0.05]: Parameter which defines at least how close the current pan and tilt angles have to be to the target angles to be considered as having reached the target. If the absolute differences between the current and target pan and tilt angles are lesser than or equal to this value, we consider that the pan-tilt unit has reached the required angles.
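As a worked example of the tolerance check: with the default tolerance of 0.05, a target of (pan, tilt) = (0.5, 0.0) and a current state of (0.47, 0.03) gives absolute differences of 0.03 and 0.03; both are less than or equal to 0.05, so the codelet considers the target reached and reports success.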
isaac.hgmm.HgmmPointCloudMatching¶
Description
Calculates ego pose with HGMM (Hierarchical Gaussian Mixture Model) from the input point cloud. https://research.nvidia.com/publication/2018-09_HGMM-Registration
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- cloud [PointCloudProto]: Takes as input point clouds from sensor like Lidar or Depth Camera
Outgoing messages
- (none)
Parameters
- levels [int] [default=2]: The number of levels to build the HGMM tree. This number depends on the complexity of the scene geometry and the number of points in the point clouds. Typically 2 works well for simple scenes and point clouds, while 3 empirically works better for denser point clouds such as a Velodyne-32 or above. The higher the level, the more accurate the registration, but divergence becomes more probable (the model overfits and becomes unstable). Levels of 4 and above are typically reserved for high-fidelity 3D reconstruction, not 6-DoF registration.
- convergence_threshold [float] [default=0.001]: The lower the threshold, the longer the algorithm takes to converge, but the better the result. 0.01: fast to converge but worse accuracy; 0.001-0.0001: slow to converge but often better accuracy.
- max_iterations [int] [default=30]: Max iterations regardless of convergence. Most problems take on the order of 10-35 iterations per level for normal convergence tolerance ranges.
- noise_floor [float] [default=0.000]: Noise parameter (currently turned off). Used if data contains extreme outliers. Until it is enabled, basic filtering of the input needs to be performed outside of HGMM model creation and registration.
- zero_x_y_minimal_z [float] [default=1]: Minimal z-coordinate for valid points with zero x-y coordinates. Used to drop invalid points from erroneous source
- regularization [float] [default=0.01]: Regularization to prevent singularities and overfitting. If the solution is diverging, this parameter is too low. 0.0001: highly accurate but often unstable; 0.001: highly accurate but possible divergence; 0.01: robust convergence but higher error; 0.1: very robust but possibly biased result.
- axis_length [double] [default=1.0]: Ego frame axis length
- skip [int] [default=51]: Skips points to reduce the visualization load
- history_size [int] [default=10]: Keeps the past several point clouds in history for visualization
- max_distance [double] [default=10.0]: Points beyond this distance are not visualized
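A hedged configuration sketch combining the tuning guidance above (the node name "hgmm" is hypothetical); 2 levels with the default regularization of 0.01 favor robust convergence on simple scenes:
  "config": {
    "hgmm": {
      "isaac.hgmm.HgmmPointCloudMatching": {
        "levels": 2,
        "convergence_threshold": 0.001,
        "max_iterations": 30,
        "regularization": 0.01
      }
    }
  }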
isaac.imu.IioBmi160¶
Description
Interface driver for an inertial measurement unit (IMU) BMI160 IIO device. Sets up the IMU device (accel + gyro), and publishes IMU data. Initialization failures will stop the node.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- imu_raw [ImuProto]: ImuProto is used to publish IMU data read from the buffer
Parameters
- i2c_device_id [int] [default=1]: I2C device ID: matches ID of /dev/i2c-X
- imu_T_imu [SO3d] [default=SO3d::FromAxisAngle(Vector3d{1, 0, 0}, Pi<double>)]: IMU mounting pose. In the base case, the IMU is mounted on its back: rotate 180 degrees about the X-axis (flip the Y and Z axes).
isaac.imu.ImuCalibration2D¶
Description
Codelet to perform IMU calibration. Provides access to the IMU calibration library and creates (or updates) the calibration file.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- imu [ImuProto]: Imu Data
Outgoing messages
- (none)
Parameters
- imu_calibration_file [string] [default=”imu_calibration.out.json”]: Path to the output calibration file. This file will be created if it does not exist and overwritten if it exists.
- imu_variance_stationary [double] [default=0.2]: Threshold for stationary variance
- imu_window_length [int] [default=100]: Number of samples in window
isaac.imu.ImuCorrector¶
Description
Receives raw IMU data and removes biases either by using calibration data that is supplied or calibrating itself in the beginning.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- raw [ImuProto]: Receive raw IMU data
Outgoing messages
- corrected [ImuProto]: Publish corrected IMU data
Parameters
- calibration_file [string] [default=]: Optional calibration file. If a calibration file is provided, biases from the file will be removed from the IMU data. Otherwise we will calibrate in the beginning.
- calibration_variance_stationary [double] [default=0.1]: Stationary variance for calibration
- calibration_window_length [int] [default=100]: Number of samples in window for calibration
isaac.imu.ImuSim¶
Description
This codelet manages a single IMU sensor in the simulator. The user can provide biases, noises, and an optional calibration file as parameters.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- bodies [RigidBody3GroupProto]: Input states of the rigid bodies as computed by physics engine
Outgoing messages
- imu_raw [ImuProto]: Imu proto is used to publish raw Imu data received from simulator
Parameters
- imu_name [string] [default=”imu”]: Name of the IMU rigid body This param is required and should match the config file for the sim
- gravity_norm [double] [default=9.80665]: Norm of the local gravitational constant
- sampling_rate [double] [default=30.0]: Sampling Frequency
- accel_bias [Vector3d] [default=Vector3d::Zero()]: Accelerometer Bias
- accel_noise [Vector3d] [default=Vector3d::Zero()]: Accelerometer (zero mean) noise std dev
- gyro_bias [Vector3d] [default=Vector3d::Zero()]: Gyroscope Bias
- gyro_noise [Vector3d] [default=Vector3d::Zero()]: Gyroscope (zero mean) noise std dev
isaac.json.JsonMockup¶
Description
Reads JSON from a user-provided parameter and periodically publishes it as a JsonProto and, optionally, as a raw Json message
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- json [JsonProto]: Outputs a JsonProto message containing the json_mock parameter serialized to text.
- raw_json [nlohmann::json]: Outputs a copy of the json_mock parameter in a RawMessage. Use when interacting with components that do not accept capnp messages, or use with a JsonToProto component to generate capnp messages of any type.
Parameters
- json_mock [json] [default=]: The JSON to publish
- report_success [bool] [default=false]: Reports success after publishing the message, if enabled
- num_successful_publishes [int] [default=1]: The number of times the message is published before the codelet reports success, if enabled.
- raw_type [uint64_t] [default=]: The type id to set for the raw json message. If this parameter is not set, raw json messages are not published
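For illustration, a minimal sketch of a JsonMockup configuration (the node name "mock" and the payload are hypothetical) that publishes a small JSON object three times and then reports success:
  "config": {
    "mock": {
      "isaac.json.JsonMockup": {
        "json_mock": { "text": "hello" },
        "report_success": true,
        "num_successful_publishes": 3
      }
    }
  }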
isaac.json.JsonReplay¶
Description
Loads JSON messages from file and publishes one proto message per tick
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- json_proto [JsonProto]: JSON message read from the file
Parameters
- jsonfile_path [string] [default=”/tmp/input.jsonl”]: Path to JSON file
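The file is expected to contain one JSON message per line (JSON Lines). Purely as an illustration (the contents are hypothetical), a /tmp/input.jsonl with two messages could look like:
  {"counter": 1, "text": "first message"}
  {"counter": 2, "text": "second message"}
One message is published per tick.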
isaac.json.JsonTcpClient¶
Description
The JsonTcpClient class is a codelet which connects to a remote TCP server to send and receive Json serialized capnp messages. For each message that is sent over the TCP connection, a corresponding Json serialized MessageHeaderProto message is also sent to encode metadata about the message (channel, acqtime, uuid, etc.). When the codelet receives a Json message from the TCP server, it will deserialize that message and publish it to the channel specified in the MessageHeaderProto metadata. When the codelet receives a message from the application on any channel, it will serialize the message as well as a MessageHeaderProto message with the relevant metadata to a Json object and send it to the TCP server over the TCP connection.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- host [string] [default=]: The IP address or hostname of the remote host from which messages will be received.
- port [int] [default=]: The TCP port number on which the remote host is publishing messages.
- reconnect_interval [double] [default=0.5]: If a connection to the remote cannot be established or breaks, we try to re-establish the connection at this interval (in seconds).
- message_check_interval [double] [default=0.5]: The maximum amount of time to block on the socket before checking for new messages
- default_channel [string] [default=”json”]: If a message is received from the remote server with no channel specified, the codelet will publish the message on this channel
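Purely as an illustration of the metadata described above (the field names and values here are hypothetical, not the authoritative MessageHeaderProto schema), the header object accompanying each payload over the TCP connection carries information along the lines of:
  { "channel": "command", "acqtime": 1234567890, "uuid": "<message-uuid>" }
The receiving side uses the channel entry to decide on which channel the deserialized message is published.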
isaac.json.JsonToProto¶
Description
Converts JSON messages into proto messages.
JSON messages must be published on the channel “json”. Note that the input channel does not appear in the normal list of channels due to how this codelet works internally.
Type ID must be set correctly otherwise conversion will fail.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- json [nlohmann::json]: JSON messages received on this channel are converted to proto messages.
Outgoing messages
- proto [MessageHeaderProto]: Publish proto messages of the registered proto definition specified by the proto ID of the incoming JSON message. Note that MessageHeaderProto is not the actual type of the published messages.
Parameters
- (none)
isaac.json.JsonWriter¶
Description
This codelet writes all Jsons that are received through JsonProto messages and/or raw Json messages to a file. The output file is in the JSON Lines format, which is a newline-delimited JSON that is used for JSON streaming. For more information, see http://jsonlines.org/
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- json [JsonProto]: Incoming JSON message to write that is received as a proto
- raw_json [Json]: Incoming JSON message to write that is received as a raw message
Outgoing messages
- (none)
Parameters
- filename [string] [default=]: Path to write file. If the file already exists, it will be overwritten by this codelet.
- indent [int] [default=-1]: Sets the indent of nlohmann::basic_json::dump(). Leave as -1 for the compact representation with no newlines. Set to positive for newlines with indent level.
- itemize_top_level_array [bool] [default=false]: If enabled each item in the top-level array will be written as an individual line.
isaac.json.ProtoToJson¶
Description
Converts proto messages into Json messages. Accepts all registered proto messages on channel “proto”. Requires a valid type ID on incoming messages.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- proto [MessageHeaderProto]: Receives any proto message. Note that MessageHeaderProto is not the actual type of the received messages.
Outgoing messages
- json [nlohmann::json]: Publishes converted Json message
Parameters
- (none)
isaac.kaya.KayaBaseDriver¶
Description
A driver for the Kaya base with three wheels in a triangular layout. The driver receives holonomic control commands and translates them to wheel motor commands, which it outputs. On the other side it receives the wheel-based state, translates it into a holonomic base state, and publishes it.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- base_command [StateProto]: The desired motion of Kaya of type messages::HolonomicBaseControls
- wheel_state [StateProto]: The measured angular speeds for each wheel received from the motor driver. The order of wheels is the same as for the wheel_command channel.
Outgoing messages
- base_state [StateProto]: The state of Kaya of type messages::HolonomicBaseDynamics
- wheel_command [StateProto]: The desired angular speeds for each wheel to be sent to the motor driver. The order of wheels in the message is: front right, front left, back
Parameters
- wheel_base_length [double] [default=0.125]: Distance of the wheel center to the robot center of rotation
- wheel_radius [double] [default=0.04]: The radius of Kaya wheels
- max_linear_speed [double] [default=0.3]: Maximum allowed linear speed
- max_angular_speed [double] [default=0.5]: Maximum allowed angular speed
isaac.kinova_jaco.KinovaJaco¶
Description
A class to receive command and publish state information for the Kinova Jaco arm.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- arm_command [CompositeProto]: Command for the arm, parsed based on control mode. In joint control mode, requires a scalar quantity for each of the seven joints, with entity matching the kinematic tree link names and measure matching the control mode (position or speed). In end-effector control modes, the entity name must match the end_effector_name set in the config. In end-effector pose control, expects a position quantity with a translation vector (Vector3) and a rotation quantity with a quaternion (Vector4), in base coordinates. In end-effector velocity control, expects a speed quantity with a linear velocity (Vector3) in base coordinates and an angularSpeed quantity with an angular speed (Vector3) in end-effector coordinates.
- finger_command [CompositeProto]: Receives command to open or close fingers
Outgoing messages
- arm_state [CompositeProto]: Current state of the arm, including the end-effector pose and the joint positions and velocities. The schema includes the following: end-effector position (Vector3) and rotation quaternion (Vector4), in base coordinates; and a scalar quantity for position and for speed for each of the seven joints, where the entity name is the link name in the kinematic tree.
- finger_state [CompositeProto]: Current position of the fingers
Parameters
- kinova_jaco_sdk_path [string] [default=]: Path to Kinova Jaco SDK for Gen7 7 DoF arm on Ubuntu. Driver is tested for use with Kinova Jaco SDK 1.4.2 and SDK 1.5.1. See SDK resources for details of the Kinova Jaco driver.
- initialize_home [bool] [default=false]: If true, initializes arm and finger to home position in start.
- control_mode [ControlMode] [default=ControlMode::kJointPosition]: Set control mode for arm. This can be changed at runtime
- kinematic_tree [string] [default=]: Name of the node containing the map:KinematicTree component
- end_effector_name [string] [default=”end_effector”]: Name of the end effector for parsing/creating composite message.
- finger_names [std::vector<std::string>] [default=std::vector<std::string>({“finger1”, “finger2”, “finger3”})]: Name of fingers used in composite proto
- finger_use_raw_value [bool] [default=false]: If true, use raw (un-normalized) values received in command and for publishing state for the fingers. For kinova 3-finger hand, these are typically in the range of [0, 7340] for finger position, and [-6000, 6000] for speed. If false, use 0.0/1.0 for open/close fingers, and the raw position state is mapped to binary using the finger_close_threshold param below.
- finger_close_threshold [float] [default=3000.0f]: Threshold for the raw finger position reported by kinova api to map to binary open/close state. Values below this threshold are considered open, above closed. This is only used if param finger_use_raw_value is false.
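A worked example of the finger mapping: with finger_use_raw_value set to false and the default finger_close_threshold of 3000.0, a raw finger position of 2200 reported by the Kinova API is published as 0.0 (open), while a raw position of 4100 is published as 1.0 (closed). With finger_use_raw_value set to true, the raw values are passed through unchanged.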
isaac.laikago.LaikagoDriver¶
Description
A driver for Laikago, a quadruped robot designed by Unitree Robotics. The driver receives holonomic control commands and translates them to Laikago SDK high-level velocity commands, which it outputs. On the other side it receives Laikago states (base velocity), translates them into a holonomic base state, and publishes it.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- base_command [StateProto]: The desired motion of Laikago of type messages::HolonomicBaseControls
Outgoing messages
- base_state [StateProto]: The state of Laikago of type messages::HolonomicBaseDynamics
- imu [ImuProto]: The imu proto has linear and angular acceleration data
Parameters
- speed_limit_linear [double] [default=0.6]: Maximum linear speed robot is allowed to travel with
- speed_limit_angular [double] [default=0.8]: Maximum angular speed robot is allowed to rotate with
- min_command_speed [double] [default=0.01]: Minimum command speed to switch to walking mode
- scale_back_speed [double] [default=2.0]: Scale for backward speed. The Laikago walks backward slower than forward given the same command value. To approximately compensate for the bias, we scale up the command speed
- scale_side_speed [double] [default=3.0]: Scale for side-walk speed. The Laikago side-walk speed is about three times slower than commanded. To approximately compensate for the tracking error, we scale up the command speed
isaac.lidar_slam.Cartographer¶
Description
This component wraps the Google Cartographer LIDAR SLAM library for ISAAC SDK. You can learn more about Cartographer on their webpage: https://google-cartographer.readthedocs.io/en/latest/. Please note that Cartographer is an experimental library and that results may vary depending on the LIDAR you are using or the environment you are trying to map.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: Cartographer uses a 2D LIDAR scan to build the map
Outgoing messages
- (none)
Parameters
- lua_configuration_directory [string] [default=”“]: Folders to search for Cartographer lua scripts, separated by commas
- lua_configuration_basename [string] [default=”“]: File name of the specific Cartographer lua script to load
- output_path [string] [default=”/tmp”]: Folder to write submaps and generated map
- background_size [Vector2i] [default=Vector2i(1500, 1500)]: The size of the canvas for visualizing the map in Sight (in grid cells)
- background_translation [Vector2d] [default=Vector2d(-75, -75)]: Translation to apply on the background image (in meters)
- num_visible_submaps [int] [default=8]: Only the most recent submaps are visualized in Sight, for performance reasons.
isaac.lidar_slam.GMapping¶
Description
This component wraps the GMapping LIDAR SLAM library for ISAAC SDK. You can learn more about GMapping on their webpage: https://openslam-org.github.io/gmapping.html. Please note that GMapping is an experimental library and that results may vary depending on the LIDAR you are using or the environment you are trying to map.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: GMapping uses a 2D LIDAR scan to build the map
- odometry [Odometry2Proto]: Odometry can either be read from this message or from the pose tree. The message is only used if the parameter use_pose_tree is set to false, otherwise odometry is read from the pose tree.
Outgoing messages
- (none)
Parameters
- file_path [string] [default=”/tmp”]: Directory path used to save map snapshots
- sensor_frame [string] [default=”lidar”]: The reference frame in which range scans arriving on the flatscan channel are defined. Sensor needs to be fixed with respect to the robot. Pose is read in the beginning only.
- build_map_period [double] [default=2.0]: How often the map is recomputed, in seconds
- laser_matcher_resolution [double] [default=DegToRad(3.0)]: Resolution to be used in scan matcher angles
- map_x_max [double] [default=100.]: Maximum x value of the initial map
- map_y_max [double] [default=100.]: Maximum y value of the initial map
- map_x_min [double] [default=-100.]: Minimum x value of the initial map
- map_y_min [double] [default=-100.]: Minimum y value of the initial map
- map_resolution [double] [default=0.1]: Distance between each pixel in the map
- max_range [double] [default=32.0]: The maximum range of the lidar. This value should be close to the physical range of the lidar to exploit as much of the available information as possible. This value limits the out-of-range threshold coming from the input flatscan message.
- map_update_range [double] [default=30.0]: The range within which the map is updated. The update range must be smaller than or equal to the maximum range parameter, as it relies on the lidar range information. The chosen value trades off the global consistency of the map against its sharpness.
- number_particles [int] [default=40]: Number of particles used to estimate position of the robot
- linear_distance [double] [default=0.3]: Linear threshold used to attempt scan matching
- angular_distance [double] [default=0.1]: Angular threshold used to attempt scan matching
- use_pose_tree [bool] [default=false]: Whether robot pose is read from pose tree or RX channel
isaac.lqr.AckermannLqrPlanner¶
Description
The AckermannLqrPlanner class computes a local plan and outputs the trajectory given the position of the robot and its surroundings. The local plan is computed using the AckermannLqr. The LQR needs to be configured using PlannerCostBuilder components. There are two special PlannerCostBuilders: BoundedQuadraticCostBuilder, which can be configured to provide the target destination (the config of this component is updated each time a new path is provided); and PolylineDistanceQuadraticCostBuilder, which is used to provide the hint path for the vehicle to follow (also updated each time a new path is provided).
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- odometry [Odometry2Proto]: Contains the odometry information required for planning (current speed, acceleration, etc.)
- vehicle_state [CompositeProto]: Contains the vehicle steering angle
- global_plan [Plan2Proto]: Contains the target plan the local planner attempts to follow
Outgoing messages
- plan [CompositeProto]: Contains a series of poses to form the trajectory that is optimal to follow
Parameters
- odom_frame [string] [default=”odom”]: Name of the odometry frame that the pose information in the received state message is relative to
- static_frame [string] [default=”world”]: Name of the static frame used to transform the plan into the odometry frame
- time_between_command [double] [default=0.1]: Step size to be used in integrating the state
- num_controls [int] [default=50]: Number of controls used to generate the path.
- decay [double] [default=0.99]: Decay applied at each step: decay < 1 gives higher importance to the beginning of the path, while decay > 1 emphasizes the end of the path (see the worked example after this parameter list).
- wheel_base [double] [default=3.0]: Wheelbase for the bicycle model (distance between front and rear axis in meter)
- state_planner_cost_name [string] [default=]: This is the name of the root PlannerCostBuilder component for the state. The cost of this builder will be added at each position along the path. Generally you might want to use AdditionCostBuilder as the root of several other cost functions. For example you might want to use some of the following costs:
- a DistanceQuadraticCostBuilder which is in charge of staying clear from obstacles
- a RangeConstraintsCostBuilder to set up boundary limits on the speed and angular speed
- a PolylineDistanceQuadraticCostBuilder to set the path to follow
- a BoundedQuadraticCostBuilder to set the target position to reach
- a DotProductCostBuilder to set a reward for moving in a given direction
- etc…
Add more PlannerCostBuilder to the root builder to extend the cost function.
- control_planner_cost_name [string] [default=]: This is the name of the root PlannerCostBuilder component for the controls. The cost of this builder is computed on the control vector and is added for each position along the path. The default PlannerCostBuilder is a RangeConstraintsCostBuilder, but you can modify the cost function or add new ones by using an AdditionCostBuilder (see the state_planner_cost_name description for more information on how you can customize a cost function).
- final_planner_cost_name [string] [default=]: This is the name of a PlannerCostBuilder component. The cost of this builder is computed on the state vector for the final position. There is no default one used, but you can add one (see the state_planner_cost_name description for more information on how you can customize a cost function).
- path_cost_name [string] [default=]: Name of the PolylineDistanceQuadraticCostBuilder in the state planner cost used to set the global path for the plan to follow.
- target_cost_name [string] [default=]: Name of the BoundedQuadraticCostBuilder in the planner cost used to set the target pose.
- stop_robot_on_collision [bool] [default=true]: Force the robot to stop when it’s colliding. This can prevent the robot from moving when it gets too close to an obstacle (a wall for example) if the robot model is not accurate enough or if the localization is off. If set to false, the robot will try to move away from the colliding state, and if it fails it will stop.
- plan_visualization_width [double] [default=]: If set, plan will have this width on Sight
- plan_visualization_color [Vector4ub] [default=(Vector4ub{118, 185, 0, 100})]: Color to visualize plan with
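A worked illustration of the decay parameter above, assuming the decay is applied multiplicatively per step: with decay = 0.99 and num_controls = 50, the cost at step k is scaled by 0.99^k, so the last step carries roughly 0.99^49 ≈ 0.61 of the weight of the first; with decay = 1.01 the last step would instead be weighted about 1.01^49 ≈ 1.63 times the first, emphasizing the end of the path.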
isaac.lqr.DifferentialBaseLqrPlanner¶
Description
The DifferentialBaseLqrPlanner class computes and outputs the local plan given the position of the robot and its surroundings. The local plan is computed using the DifferentialLqr. The lqr can be configured using PlannerCostBuilder components, see xxxx_planner_cost_name description below for more information.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- odometry [Odometry2Proto]: Contains the odometry information required for planning (current speed, acceleration, etc.)
- global_plan [Plan2Proto]: Contains the target plan the local planner attempts to follow
Outgoing messages
- plan [DifferentialTrajectoryPlanProto]: Contains a series of poses to form the trajectory that is optimal to follow
Parameters
- time_between_command_ms [int] [default=100]: Step size to be used in integrating the state
- lqr_iterations [int] [default=100]: Upper limit on the number of iterations made by the lqr solver
- line_search_iterations [int] [default=50]: Upper limit on the number of iterations made by the lqr solver during the line search.
- lqr_solver_minimum_change [double] [default=1e-5]: The minimum change in the lqr score before stopping the solver.
- num_controls [int] [default=50]: Upper limit on the number of steps in the output trajectory plan
- min_num_controls [int] [default=5]: Lower limit on the number of steps in the output trajectory plan (only used if the flag use_adaptive_num_controls is set to true).
- use_adaptive_num_controls [bool] [default=false]: If true, the number of controls will adapt to the distance (but remain between min_num_controls and num_controls).
- adaptive_num_controls_factor [double] [default=5.0]: If use_adaptive_num_controls is true, this parameter is used to compute the number of control points: num_controls = adaptive_num_controls_factor * distance / (max_speed * dt). See the worked example after this parameter list.
- gain_lat_acceleration [double] [default=0.2]: Gain of a quadratic cost to penalize the lateral acceleration
- gain_to_target [double] [default=0.1]: Gain of a custom cost to penalize the robot according to its distance to the target
- gain_to_end_position_x [double] [default=20.0]: Gain of a quadratic cost to penalize the last position in forward/backward direction relative to the target
- gain_to_end_position_y [double] [default=50.0]: Gain of a quadratic cost to penalize the last position in lateral direction relative to the target
- gain_to_end_angle [double] [default=1.0]: Gain of a quadratic cost to penalize the robot if its orientation does not match the target
- gain_to_end_speed [double] [default=10.0]: Gain of a quadratic cost to penalize the robot if it is still moving
- gain_to_end_angular_speed [double] [default=10.0]: Gain of a quadratic cost to penalize the robot if it is still rotating
- max_speed [double] [default=0.75]: Soft limit on how fast we would like to move
- min_speed [double] [default=-0.0]: Soft limit on how slow we are allowed to move
- distance_to_target_sigma [double] [default=1.0]: Controls the strength of the gradient depending on the distance to the target. The error cost is of the form d^2/(d^2 + s^2): it behaves as a quadratic cost close to the target and as a constant value far away from the target.
- decay [double] [default=1.01]: Decay applied at each step: decay < 1 gives higher importance to the beginning of the path, while decay > 1 emphasizes the end of the path.
- distance_to_waypoint [double] [default=1.0]: Maximum distance the end of the plan needs to be from a waypoint before trying to move to the next waypoint.
- angle_to_waypoint [double] [default=DegToRad<double>(20.0)]: Maximum angle the end of the plan needs to be from a waypoint before trying to move to the next waypoint.
- state_planner_cost_name [string] [default=]: This is the name of the root PlannerCostBuilder component for the state. The cost of this builder will be added at each position along the path. Generally you might want to use AdditionCostBuilder as the root of several other cost functions. For example differential_base_control.subgraph.json adds:
- a DistanceQuadraticCostBuilder which is in charge of staying clear from obstacles
- a RangeConstraintsCostBuilder to set up boundary limits on the speed and angular speed
- a RangeConstraintsCostBuilder to set some extra smaller penalties.
You can add more PlannerCostBuilder to the root builder to extend the cost function.
- control_planner_cost_name [string] [default=]: This is the name of the root PlannerCostBuilder component for the controls. The cost of this builder is computed on the control vector and is added for each position along the path. The default PlannerCostBuilder is a RangeConstraintsCostBuilder, but you can modify the cost function or add new ones by using an AdditionCostBuilder (see the state_planner_cost_name description for more information on how you can customize a cost function).
- final_planner_cost_name [string] [default=]: This is the name of a PlannerCostBuilder component. The cost of this builder is computed on the state vector for the final position. There is no default one used, but you can add one (see the state_planner_cost_name description for more information on how you can customize a cost function).
- path_cost_name [string] [default=]: Name of the PolylineDistanceQuadraticCostBuilder in the state planner cost used to set the global path for the plan to follow.
- use_predicted_position [bool] [default=true]: Indicates whether the predicted position or the actual position is used while planning. If true, this produces a more stable path, but it relies on a good controller to keep the robot on track. If false, then this codelet also acts as a controller.
- reset_robot_position [int] [default=0]: How frequently (in terms of ticks) the robot position is reset to the odometry: 0: disabled, never reset the robot position unless use_predicted_position is set to false; 1: always reset the robot position regardless of the value of use_predicted_position; 10: assuming use_predicted_position = true, every 10 ticks the robot position is reset to where the odometry predicts the robot to be.
- max_predicted_position_error [double] [default=0.5]: The distance from the predicted position we tolerate. If we exceed this value, the actual robot position is used.
- manual_mode_channel [string] [default=”“]: Channel publishing whether or not the robot is in manual mode
- print_debug [bool] [default=false]: Specifies whether to show extra information in Sight for debug purposes
- reuse_lqr_plan [bool] [default=true]: Specifies whether or not to use the previous plan as the starting point for the lqr
- restart_planning_cycle [int] [default=10]: How frequently (in terms of ticks) the planning is restarted from scratch: 0: disabled, never restart (unless reuse_lqr_plan is set to false); 1: never reuse the plan (regardless of the value of reuse_lqr_plan); 10: assuming reuse_lqr_plan = true, every 10 ticks the previous plan is dropped and planning restarts from a stopped position.
- static_frame [string] [default=”world”]: Name of a frame which is static. This is used to compensate for the odometry drift.
- stop_robot_on_collision [bool] [default=true]: Force the robot to stop when it’s colliding. This can prevent the robot from moving when it gets too close to an obstacle (a wall for example) if the robot model is not accurate enough or if the localization is off. If set to false, the robot will try to move away from the colliding state, and if it fails it will stop.
- plan_visualization_width [double] [default=]: If set, plan will have this width on Sight
- plan_visualization_color [Vector4ub] [default=(Vector4ub{118, 185, 0, 100})]: Color to visualize plan with
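A worked example of the adaptive_num_controls_factor formula above: with the default factor of 5.0, a distance of 2 m to the target, max_speed = 0.75 and dt = 0.1 s, the formula yields 5.0 * 2 / (0.75 * 0.1) ≈ 133 control points, which is then clamped into the [min_num_controls, num_controls] range, i.e. to 50 with the default limits.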
isaac.lqr.MultiJointLqrPlanner¶
Description
Implementation of MultiJointPlannerInterface. Computes a local plan for a list of joints in a kinematic tree object given their current position and speed and their target position and (optionally) speed. The local plan is computed using the LQR simple control planner.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- acceleration_max [VectorXd] [default=]: Maximum acceleration for each joint for lqr solver
- acceleration_min [VectorXd] [default=]: Minimum acceleration for each joint for lqr solver
- speed_max [VectorXd] [default=]: Maximum speed for each joint for lqr solver
- speed_min [VectorXd] [default=]: Minimum speed for each joint for lqr solver
- gain_control [double] [default=1.0]: Gain for the control (to prevent high change of acceleration) for lqr solver
- gain_final_state [double] [default=10.0]: Gain for the final state (for position, speed and acceleration) for lqr solver
- gain_limits [double] [default=100.0]: Gain when the speed or acceleration is outside the target range for lqr solver
isaac.map.AdditionFlatmapCost¶
Description
Helper FlatmapCostCombination that takes a list of flatmap costs and adds them together. If any of the flatmap costs returns an invalid position, this codelet also returns the position as invalid.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- (none)
isaac.map.DirectedAreaFlatmapCost¶
Description
Cost map layer that contains an area represented by a polygon and a primary target direction. For any robot pose inside the polygon, the orientation of the robot is checked. The more the robot orientation differs from the target direction of the polygonal area, the higher the cost to pass through it. The goal is to provide map regions with “preferred” directions that are traversed against the preferred direction only at a high cost (when no alternative is available).
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- color [Vector3ub] [default=(Vector3ub{120, 240, 120})]: Color to visualize the polygon in the map.
- arrows_distance [Vector2d] [default=(Vector2d{1.1, 1.1})]: The distance between two arrows, in rows and columns.
- arrow_length [double] [default=0.8]: The length of the vector arrow.
- arrow_opening_angle [double] [default=0.6]: The opening angle of the vector arrow.
- polygon [std::vector<Vector2d>] [default=]: Corners of the polygon covering the directed area.
- target_angle [double] [default=]: The angle of the area’s direction.
- penality_angle [double] [default=1.0]: The scaling factor of the cost associated to the delta angle.
- center_tendency_weight [double] [default=10.0]: Weight for how strong the tendency is to go to the center of the polygon.
- half_cost_fraction [double] [default=0.25]: Determines at which lateral displacement fraction to either side of the polygon relative to its center half of the center_tendency_weight should be reached. This decides on the aggressiveness with which a plan is pushed into the middle lane of the directed area.
- inside_offset [double] [default=1.0]: The base offset when close to the polygon.
- outside_weight [double] [default=1.0]: The weight when we are too far away from the polygon.
- inside_backward_penalty [double] [default=]: The penalty associated to backward motion inside the polygon. If empty, then backward is not allowed inside.
- outside_backward_penalty [double] [default=]: The penalty associated to backward motion outside the polygon. If empty, then backward is not allowed outside.
- delta_angle_range [Vector2d] [default=]: It contains the valid range of the angle relative to the target_angle. If not provided all the orientations will be valid.
- use_robot_center [bool] [default=true]: If true, we only use the robot center to decide if a point is inside the area. This is much faster than using the robot_shape. If false, then the robot shape and orientation will be used to decide whether the position is inside the area.
isaac.map.KinematicTree¶
Description
Loads a kinematic tree from file and provides access to the model
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- kinematic_file [string] [default=]: Path to a file to load the kinematic tree from
isaac.map.KinematicTreeToPoseTree¶
Description
Receives current joint states of a kinematic tree and writes the 3d poses of links to the pose tree. The name of each link is used as the output frame of the link on the pose tree. This codelet requires a map::KinematicTree component to access the kinematics model.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- state [CompositeProto]: Current joint state
Outgoing messages
- (none)
Parameters
- kinematic_tree [string] [default=]: Name of the node containing the map::KinematicTree component
- root_frame [string] [default=]: Name of the root frame of the kinematic tree. This frame is not part of the kinematic tree, since it depends on where the root link of the kinematic tree is mounted on. This config is only used in start.
isaac.map.Map¶
Description
This component is used to mark a node as a map and gives convenient access to the various map layers and also some cross-layer functionality.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- graph_file_name [string] [default=”“]: Filename under which to store the current graph whenever there is an update to the map.
- config_file_name [string] [default=”“]: Filename under which to store the current configuration whenever there is an update to the map.
isaac.map.MapBridge¶
Description
A bridge for communication between map container and WebsightServer
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- request [nlohmann::json]: Request to the MapBridge
Outgoing messages
- reply [nlohmann::json]: Reply from the MapBridge
Parameters
- (none)
isaac.map.MinimumFlatmapCost¶
Description
Helper FlatmapCostCombination that takes a list of flatmap costs and returns the minimum value. If any of the flatmap costs returns an invalid position, this codelet also returns the position as invalid.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- (none)
isaac.map.MultiplicationFlatmapCost¶
Description
Helper FlatmapCostCombination that takes a list of flatmap costs and multiplies them together. If any of the flatmap costs returns an invalid position, this codelet also returns the position as invalid.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- (none)
isaac.map.ObstacleAtlas¶
Description
A component which holds a virtual representation of obstacles detected around the robot. Currently distance maps and spherical obstacles are available. This component is thread safe and can be accessed from other components without message passing.
Type: Component - This component does not tick and only provides certain helper functions.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- static_frame [string] [default=”world”]: Frame which can be considered static, it is used to do time synchronization of obstacles.
isaac.map.OccupancyFlatmapCost¶
Description
Cost map layer that uses a map. If the robot is in collision with the map, the position will be marked as invalid. Otherwise the cost will increase the closer we get to an obstacle using a quadratic cost:
cost = offset_weight + penality * std::max(0, target_distance - distance)^2
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- map_name [string] [default=”map/isaac.navigation.DistanceMap”]: Name of the component that contains the DistanceMap used to check for detection and get the dimension of the world.
- target_distance [double] [default=0.5]: The distance before we start penalizing the node. Any node further away than this distance from an obstacle won’t be penalized; if the distance is smaller, a penalty of penality * (target_distance - distance)^2 will be assigned to the position, following the cost formula above.
- penality [double] [default=4.0]: Scale used to penalize nodes too close to obstacles (see target_distance for more information).
- offset_weight [double] [default=1.0]: Offset weight: the minimum weight returned by this function, corresponding to positions in the map further away from an obstacle than the target distance.
- backward_penalty [double] [default=]: The penalty associated to backward motion. If empty, then backward is not allowed.
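A worked example of the cost formula above with the default parameters (offset_weight = 1.0, penality = 4.0, target_distance = 0.5): at a distance of 0.3 m from the nearest obstacle, cost = 1.0 + 4.0 * max(0, 0.5 - 0.3)^2 = 1.0 + 4.0 * 0.04 = 1.16. At a distance of 0.7 m the max() term vanishes and the cost is just the 1.0 offset.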
isaac.map.OccupancyGridMapLayer¶
Description
A grid map layer for a map node. It provides access to an occupancy grid map which stores for each cell whether the cell is blocked or free. It also holds a distance map computed based on occupancy grid map which contains the distance to the nearest obstacle for each cell computed based on a given threshold.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- filename [string] [default=]: Filename of grayscale PNG which will be loaded as the occupancy grid map
- cell_size [double] [default=]: Size of one map pixel in meters
- threshold [double] [default=0.4]: Threshold used to compute the distance map. Cells with a value larger than this threshold are assumed to be blocked.
isaac.map.PolygonFlatmapCost¶
Description
Cost map layer that contains areas represented by a list of polygons loaded from a PolygonMapLayer. These areas carry a specific weight applied to any position within the area, while another weight is applied to any robot position outside these areas.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- polygon_layer_name [string] [default=]: Name of the component that contains the PolygonMapLayer used to get the list of polygons.
- polygon_names [std::vector<std::string>] [default=]: List of the names of the polygon in the PolygonMapLayer to be used to compute the cost. If it is not provided all the polygons will be used.
- inside_weight [double] [default=]: The weight of a position inside the polygon areas.
- outside_weight [double] [default=]: The weight of a position outside the polygon areas.
- use_robot_center [bool] [default=true]: If true, we only use the robot center to decide if a point is inside the area. This is much faster than using the robot_shape. If false, then the robot shape and orientation will be used to decide whether the position is inside the area.
- inside_backward_penalty [double] [default=]: The penalty associated to backward motion inside the polygon. If empty, then backward is not allowed inside.
- outside_backward_penalty [double] [default=]: The penalty associated to backward motion outside the polygon. If empty, then backward is not allowed outside.
isaac.map.PolygonMapLayer¶
Description
A map layer which holds annotated polygons and provides various methods to access them
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- polygons [json] [default=nlohmann::json::object()]: A json object from configuration containing the polygons.
Layout: { "poly1": { "points": [[<polygon point1>], [<polygon point2>]], }, }
- color [Vector3i] [default=(Vector3i{120, 120, 120})]: Layer color.
- frame [string] [default=”world”]: Frame the polygons are defined in.
- obstacle_max_distance [double] [default=1.5]: The maximum distance to consider to create the obstacle from the polygons
- obstacle_pixel_size [double] [default=0.1]: The resolution of the map used to create the obstacle from the polygons
isaac.map.PolylineFlatmapCost¶
Description
Cost map layer that contains an area represented by a polyline and a width. For any robot position we look for the closest point on the polyline and deduce the target direction from the direction of the closest edge. If the robot is close enough to the polyline the cost is given by:
cost = inside_offset + penality_angle * (angle - target_angle)^2 + penality_distance * dist^2
If the robot is too far away, the cost is:
cost = outside_weight
In addition the area might have an optional validity range. If the robot orientation is outside the provided range, the position is considered invalid.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- color [Vector3ub] [default=(Vector3ub{120, 240, 120})]: Color to visualize the polygon in the map.
- polyline [std::vector<Vector2d>] [default=]: The points of the polyline that defines the area.
- width [double] [default=]: The distance we consider the position to be influenced by the polyline
- penality_angle [double] [default=1.0]: The scaling factor of the cost associated to the delta angle.
- penality_distance [double] [default=1.0]: The scaling factor of the cost associated to the distance to the polyline.
- inside_offset [double] [default=1.0]: The base offset when close to the polyline.
- outside_weight [double] [default=1.0]: The weight when we are too far away from the polyline
- inside_backward_penalty [double] [default=]: The penalty associated to backward motion close to the polyline. If empty, then backward is not allowed in the area of influence of the polyline.
- outside_backward_penalty [double] [default=]: The penalty associated to backward motion far from the polyline. If empty, then backward is not allowed.
- delta_angle_range [Vector2d] [default=]: It contains the valid range of the angle relative to the target_angle. If not provided all the orientations will be valid.
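A worked example of the polyline cost above with the default parameters (inside_offset = 1.0, penality_angle = 1.0, penality_distance = 1.0): a robot within the width of influence, 0.5 m from the polyline and with a heading 0.2 rad off the target direction, has cost = 1.0 + 1.0 * 0.2^2 + 1.0 * 0.5^2 = 1.29. Beyond the width of influence the cost is the flat outside_weight of 1.0.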
isaac.map.Spline¶
Description
A component which holds a Catmull-Rom spline. If a file name is set, this component will read spline points from file. A function to publish the spline as json message is provided.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- json [JsonProto]: Outgoing message that defines this spline
Parameters
- filename [string] [default=]: If set, spline points will be read from file. Expected layout: { "keypoints": [[2.3, 4.5], [1.1, 7.5], [0.0, -4.5], [-2.2, 0.1]], "knot": 0.5 }
isaac.map.SurfletModelAtlas¶
Description
A database which provides access to surflet models. A surflet model is a set of surflets and each surflet has a point location, a normal for that point location, and a radius of influence.
Surflet models are loaded from a cask file where they are stored using CompositeProto. They can be written to a cask file, for example, by using the cask Python API. Each surflet model is stored as one CompositeProto which needs to contain the following three quantities: (“point”, kPosition, 3), (“normal”, kNormal, 3), (“radius”, kNone, 1).
When surflets are loaded they are stored as an Eigen matrix to allow efficient mathematical operations. Surflets are stored as seven doubles per column of the matrix. For each surflet the following values are stored in this order: [px, py, pz, nx, ny, nz, r].
Type: Component - This component does not tick and only provides certain helper functions.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- cask [string] [default=]: Cask filename which contains surflet models.
isaac.map.SurfletModelPublisher¶
Description
A codelet which periodically publishes surflet models stored in a SurfletModelAtlas component. This component can be used to give event-driven components access to surflet models. For additional information on formats see SurfletModelAtlas.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- surflets [CompositeProto]: The requested surflet model as a CompositeProto.
Parameters
- model_atlas [string] [default=]: Name of surflet model atlas database. This has the form node_name/component_name.
- model_names [std::vector<std::string>] [default=]: List of names of surflet models in atlas database
- model_frame_names [std::vector<std::string>] [default=]: List of names of coordinate frames to use for surflet models
- reference_frame_name [string] [default=]: Name of reference frame of the published surflet model
- tick_count [int] [default=1]: Number of times to publish before reporting success.
isaac.map.WaypointMapLayer¶
Description
A map layer which holds annotated waypoints and provides various methods to access them
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- waypoints [json] [default=nlohmann::json::object()]: A json object from configuration containing the waypoints.
Layout: { "wp1": { "pose": [1,0,0,0,0,0,0], "radius": 0.5 }, "wp3": { "pose": [1,0,0,0,0.1,-1.2,0], "color": [54.0, 127.0, 255.0] } }
isaac.message_generators.BinaryTensorGenerator¶
Description
TensorAssignmentGenerator creates lists of assignment tensors based on the input dimensions. It provides an option to select between random assignment, where every value in a row can be either 0 or 1, and uniform assignment, where every value is 1.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- assignment [TensorProto]: Produces tensor of type int with entries set randomly to 0 or 1.
Parameters
- dimensions [Vector2i] [default=Vector2i(1, 100)]: Dimensions of the generated rank 2 tensor
- probability [double] [default=0.5]: Probability for assignment
isaac.message_generators.CameraGenerator¶
Description
CameraGenerator publishes left and right color images and a left depth image with made up data.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- color_left [ImageProto]: Random left color image
- color_right [ImageProto]: Random right color image
- depth [ImageProto]: Random depth image
Parameters
- rows [int] [default=1080]: The number of rows for generated data
- cols [int] [default=1920]: The number of columns for generated data
isaac.message_generators.CameraIntrinsicsGenerator¶
Description
This codelet reads parameterized camera intrinsics information and encodes a CameraIntrinsicsProto message containing the camera intrinsic information. This codelet, along with ImageLoader, can be used for example to create mock-up tests when no camera hardware is available.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- image [ImageProto]: Output message will be published with the acqtime of this input message
Outgoing messages
- intrinsics [CameraIntrinsicsProto]: Intrinsics including pinhole and distortion parameters
Parameters
- distortion_model [string] [default=”brown”]: Image undistortion model. Must be ‘brown’ or ‘fisheye’
- focal_length [Vector2d] [default=]: Focal length in pixels
- optical_center [Vector2d] [default=]: Optical center in pixels
- distortion_coefficients [Vector5d] [default=Vector5d::Zero()]: Distortion coefficients
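A hedged configuration sketch (the node name "intrinsics" is hypothetical, and it is assumed here that Vector2d parameters are written as two-element arrays, as usual in Isaac JSON configuration):
  "config": {
    "intrinsics": {
      "isaac.message_generators.CameraIntrinsicsGenerator": {
        "distortion_model": "brown",
        "focal_length": [700.0, 700.0],
        "optical_center": [640.0, 360.0]
      }
    }
  }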
isaac.message_generators.ConfusionMatrixGenerator¶
Description
Binarizes the segmentation output based on a user-defined threshold
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- confusion_matrix [ConfusionMatrixProto]: Output segmentation prediction with regulated probabilities
Parameters
- (none)
isaac.message_generators.Detections2Generator¶
Description
Codelet to publish mock bounding box detections. It takes a user-defined JSON which specifies the class and the bounding box rectangle coordinates for the detections and creates a Detections2Proto message out of it.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- mock_detections [Detections2Proto]: Output mocked detection proto message
Parameters
- detection_configuration [json] [default={}]: Parameter defining the configuration of the detections we need to mock. Format: [ { "class_label": "A", "confidence": 0.8, "bounding_box_coordinates": [0.0, 0.0, 100.0, 100.0] } ] The bounding box coordinates are of the form (x1, y1, x2, y2).
isaac.message_generators.Detections3Generator¶
Description
Publishes Detections3Proto based on the pose from the pose tree. It also adds noise to perturb the output.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- output_poses [Detections3Proto]: Publishes dummy poses to test pose refinement
Parameters
- pose [Pose3d] [default=]: Randomized pose to be added to dummy pose to generate different start positions
- label [string] [default=”object”]: object label param for pose
- reference [string] [default=”camera”]: reference label param for pose
- num_detections [int] [default=1]: Number of detections at output
isaac.message_generators.DifferentialBaseControlGenerator¶
Description
Generates periodic differential base states with specified parameters
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- command [StateProto]: A StateProto representing a navigation::DifferentialBaseControl state which is populated with values specified via parameters
Parameters
- linear_speed [double] [default=0.0]: Linear speed in outgoing state message
- angular_speed [double] [default=0.0]: Angular speed in outgoing state message
isaac.message_generators.DifferentialBaseStateGenerator¶
Description
Generates periodic differential base states with specified parameters
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- state [StateProto]: Output state of differential base (DifferentialBaseDynamics)
Parameters
- linear_speed [double] [default=1.0]: Linear speed in outgoing state message
- angular_speed [double] [default=0.1]: Angular speed in outgoing state message
- linear_acceleration [double] [default=-0.1]: Linear acceleration in outgoing state message
- angular_acceleration [double] [default=0.05]: Angular acceleration in outgoing state message
isaac.message_generators.FlatscanGenerator¶
Description
FlatscanGenerator publishes a FlatscanProto periodically. Angles are parameterized. Ranges can be optionally randomized by adding an alice::Random component to the same node and setting a range_standard_deviation.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- flatscan [FlatscanProto]: Outgoing “flat” range scan
Parameters
- invalid_range_threshold [double] [default=0.2]: Beams with a range smaller than or equal to this distance are considered to have returned an invalid measurement.
- out_of_range_threshold [double] [default=100.0]: Beams with a range larger than or equal to this distance are considered to not have hit an obstacle within the maximum possible range of the sensor.
- beam_count [int] [default=1800]: Number of beams in outgoing message
- angles_range [Vector2d] [default=Vector2d(0.0, TwoPi<double>)]: Azimuth angle range for the beams
- range_mean [double] [default=20.0]: Mean value for the ranges.
- range_standard_deviation [double] [default=]: Standard deviation for the range values. Requires an alice::Random component in the same node.
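A config sketch for a mock lidar publishing at 10 Hz; the node name mock_lidar is an assumption, and range_standard_deviation is omitted so no alice::Random component is required:
"config": {
  "mock_lidar": {
    "FlatscanGenerator": {
      "beam_count": 360,
      "angles_range": [0.0, 6.283185307179586],
      "range_mean": 5.0,
      "tick_period": "10Hz"
    }
  }
}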
isaac.message_generators.HolonomicBaseControlGenerator¶
Description
Publishes a holonomic command with the desired speed values
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- command [StateProto]: A StateProto representing a navigation::HolonomicBaseControls state which is populated with values specified via parameters
Parameters
- speed_angular [double] [default=0.0]: Angular speed in counter-clockwise direction
- speed_linear_x [double] [default=0.0]: Linear speed in forward direction
- speed_linear_y [double] [default=0.0]: Linear speed in left direction
isaac.message_generators.ImageFeatureGenerator¶
Description
ImageFeatureGenerator generates an animated chessboard image with 2D features selected. The 2D features are the centers of the cells. On each tick, the image shifts toward the bottom-right corner.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- image [ImageProto]: Output mockup image
- coordinates [TensorProto]: Output keypoint coordinates
- features [TensorProto]: Output feature ids. A rank-1 int tensor with dimensions (number of features).
Parameters
- cell_size [int] [default=50]: Chessboard cell size in pixels
- image_rows [int] [default=512]: Image height in pixels
- image_cols [int] [default=512]: Image width in pixels
- animation_speed [float] [default=2.0]: Chessboard shift per tick in pixels
isaac.message_generators.ImageLoader¶
Description
Reads images from the file system and outputs them as messages. This can, for example, be used to create mock-up tests when no camera hardware is available. This codelet encodes the raw image as an ImageProto message containing the RGB image and a CameraIntrinsicsProto message containing camera intrinsic information.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- color [ImageProto]: Output the color camera image proto
- depth [ImageProto]: Output the depth image proto
Parameters
- color_filename [string] [default=]: Path of the color image file. The image is expected to be a 3-channel RGB PNG.
- depth_filename [string] [default=]: Path of the depth image file. The image is expected to be a 1-channel 16-bit grayscale PNG.
- color_glob_pattern [string] [default=]: Path of the color image directory. The directory is expected to contain only 3-channel RGB PNGs.
- loop_images [bool] [default=true]: The images in the specified directory play in a loop if set to true. Otherwise they play once.
- sort_by_number [bool] [default=false]: The images in the directory are sorted by NUMBER when they are named 'NUMBER.jpg' or 'NUMBER.png'.
- depth_scale [double] [default=0.001]: A scale parameter to convert 16-bit depth to f32 depth
- distortion_model [string] [default="brown"]: Image undistortion model. Must be 'brown' or 'fisheye'
- focal_length [Vector2d] [default=]: Focal length in pixels
- optical_center [Vector2d] [default=]: Optical center in pixels
- distortion_coefficients [Vector5d] [default=Vector5d::Zero()]: Distortion coefficients (see the DistortionProto in Camera.capnp for details)
- min_depth [double] [default=0.0]: Minimum depth
- max_depth [double] [default=10.0]: Maximum depth
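A config sketch that replays a single color image with pinhole intrinsics; the node name and the file path are placeholders for illustration:
"config": {
  "image_feeder": {
    "ImageLoader": {
      "color_filename": "/tmp/color.png",
      "focal_length": [700.0, 700.0],
      "optical_center": [360.0, 640.0],
      "tick_period": "10Hz"
    }
  }
}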
isaac.message_generators.LatticeGenerator¶
Description
Creates a lattice proto which represents a grid map. This information includes the cell size, name of the lattice frame and dimensions of the lattice in pixels. The codelet also computes the pose of the reference frame with respect to the lattice using the dimensions of the lattice and the relative offset of the reference frame with respect to the lattice. This pose is set in the pose tree.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- gridmap_lattice [LatticeProto]: Output lattice proto. This contains relevant information about the corresponding gridmap.
Parameters
- cell_size [double] [default=0.05]: Parameter defining the cell size in metres.
- dimensions [Vector2i] [default=Vector2i(256, 256)]: The dimensions of the grid map in pixels
- lattice_frame_name [string] [default="gridmap_frame"]: The name of the lattice coordinate frame. This will be used to write the pose of the gridmap relative to the reference frame in the pose tree.
- reference_frame_name [string] [default="ref"]: Name of the reference frame
- relative_offset [Vector2d] [default=Vector2d(0.0, -0.5)]: Percentage offset of robot relative to the map. The offset determines the position of the robot (or the reference frame) with respect to the grid map created. The origin of the grid map is considered to be at the top-left of the grid. The x parameter defines the percentage offset for the rows (positive is in the upward direction and negative is in the downward direction), and the y parameter defines the offset for the columns (positive is in the left direction and negative is in the right direction). Determining the offset using a percentage basis makes it agnostic to the dimensions of the map. The default value fixes the reference frame at the top-center of the grid map.
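A config sketch matching the defaults above but with a larger map; the node name lattice and the reference frame name are illustrative assumptions:
"config": {
  "lattice": {
    "LatticeGenerator": {
      "cell_size": 0.05,
      "dimensions": [512, 512],
      "lattice_frame_name": "gridmap_frame",
      "reference_frame_name": "robot",
      "relative_offset": [0.0, -0.5],
      "tick_period": "10Hz"
    }
  }
}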
isaac.message_generators.PanTiltStateGenerator¶
Description
Generates periodic PanTiltState with specified pattern
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- target [StateProto]: Output pan/tilt target state
Parameters
- pan_max_angle [double] [default=1.0471975511965976]: Maximum pan angle to either side, in radians
- pan_offset_angle [double] [default=0.0]: Offset angle for panning
- pan_speed [double] [default=0.1]: Speed for panning in rounds/second
- pan_mode [WaveMode] [default=WaveMode::kSinus]: Wave function for panning
- tilt_max_angle [double] [default=0.5235987755982988]: Maximum tilt angle to either side, in radians
- tilt_offset_angle [double] [default=0.0]: Offset angle for tilting
- tilt_speed [double] [default=0.1]: Speed for tilting in rounds/second
- tilt_mode [WaveMode] [default=WaveMode::kSinus]: Wave function for tilting
isaac.message_generators.ParkingSpotListGenerator¶
Description
Publishes messages to simulate parking spot perception. A set of num_detections spots is created in a row along the +X axis.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- parking_spot_list [ParkingSpotListProto]: Output the generated parking spot list
Parameters
- num_detections [int] [default=3]: The number of parking spot detections to generate
- spot_width [double] [default=3.0]: The width (meters) of the generated parking spot detections
- spot_length [double] [default=6.0]: The length (meters) of the generated parking spot detections
- spot_pitch [double] [default=3.1]: The pitch (meters) between the generated parking spot detections
- spot_entry_line_index [int] [default=3]: The entry line index (0-3) for the generated parking spot detections
isaac.message_generators.Plan2Generator¶
Description
Publishes a plan which is populated with the waypoints specified via parameters. Waypoints can either be listed as poses directly, or as names of frames to look up from the PoseTree. This codelet also publishes the last waypoint of the plan as a goal message.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- plan [Plan2Proto]: The plan generated as specified via parameters
- goal [Goal2Proto]: Last waypoint of the plan
Parameters
- waypoints [std::vector<Pose2d>] [default=]: List of waypoint poses in the form (angle, x, y). Example configuration:
"waypoints": [[4.15, 26.1, 9.58], [1.578, 26.14, 14.75]]
Either 'waypoints' or 'frames' needs to be set.
- frames [std::vector<std::string>] [default=]: List of waypoints as frame names defined in PoseTree. Either ‘waypoints’ or ‘frames’ parameter needs to be set.
- plan_frame [string] [default=”world”]: Frame for the waypoints. Sets the plan frame in outgoing message.
- static_frame [string] [default=”world”]: Name of a frame that is not moving. Used to decide whether a new plan message needs to be published.
- report_success [bool] [default=false]: Report success after publishing the plan
- erase_path_on_stop [bool] [default=true]: Whether or not to erase the path in sight when stopping
- new_message_threshold [Vector2d] [default=Vector2d(1e-3, DegToRad(0.01))]: A new message will be published whenever change in poses exceeds this threshold. Values are for Euclidean distance and angle respectively.
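A config sketch publishing a two-waypoint plan in the world frame; the node name mock_plan and the waypoint values are illustrative:
"config": {
  "mock_plan": {
    "Plan2Generator": {
      "waypoints": [[0.0, 1.0, 2.0], [1.57, 3.0, 4.0]],
      "plan_frame": "world",
      "tick_period": "1Hz"
    }
  }
}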
isaac.message_generators.PointCloudGenerator¶
Description
Generates point cloud messages to send out.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- point_cloud [PointCloudProto]: Outgoing proto messages used to publish the point cloud messages.
Parameters
- point_count [int] [default=10000]: Total number of points to generate.
- point_per_message [int] [default=100]: Maximum number of points in a single message.
- has_normals [bool] [default=false]: Whether there should be normals in the messages, as many as the number of points.
- has_colors [bool] [default=false]: Whether there should be colors in the messages, as many as the number of points.
- has_intensities [bool] [default=false]: Whether there should be intensities in the messages, as many as the number of points.
isaac.message_generators.Polyline2Generator¶
Description
Publishes messages to simulate a detected polyline. The provided prototype polyline is published on every tick().
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- polyline [Polyline2dProto]: Output generated polyline
Parameters
- prototype [std::vector<Vector2d>] [default={}]: The prototype of the generated polyline
isaac.message_generators.PoseGenerator¶
Description
PoseGenerator creates a series of poses which move by step every tick.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- lhs_frame [string] [default=]: Name of the reference frame of the left side of the pose
- rhs_frame [string] [default=]: Name of the reference frame of the right side of the pose
- initial_pose [Pose3d] [default=Pose3d::Identity()]: Initial pose
- step [Pose3d] [default=Pose3d::Translation({1.0, 0.0, 0.0})]: The pose delta for every tick
isaac.message_generators.RangeScanGenerator¶
Description
RangeScanGenerator publishes a RangeScanProto periodically. Simulates a range scan in a radial-symmetric world with desired parameters.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- scan [RangeScanProto]: Outgoing range scan
Parameters
- azimuth_angle_range [Vector2d] [default=Vector2d(0.0, TwoPi<double>)]: Azimuth angle range for the beams. (2pi, 0) would produce counter-clockwise rotation.
- num_slices [int] [default=16]: Number of (horizontal) ray slices that cover azimuth_angle_range
- num_slices_per_message [int] [default=0]: Number of (horizontal) ray slices published with each message. 0 means publish num_slices each message. Needs to be smaller than num_slices.
- vertical_beam_angles [std::vector<double>] [default=std::vector<double>({DegToRad(-15.0), DegToRad(-7.0), DegToRad(-3.0), DegToRad(-1.0), DegToRad(+1.0), DegToRad(+3.0), DegToRad(+7.0), DegToRad(+15.0)})]: The (vertical) beam angles to use for every slice
- max_range [double] [default=100.0]: Out of range threshold
- min_range [double] [default=0.0]: Invalid range threshold
- range_domain_max [double] [default=110.0]: Max value of the range domain. Used when normalizing range values.
- delta_time [int] [default=50000]: Delay in microseconds between firing. Default is 20 Hz.
- intensity_denormalizer [double] [default=1.0]: Scale factor which can be used to convert an intensity value from an 8-bit integer to meters.
- height [double] [default=1.0]: The height of the lidar over the ground plane
- segments [std::vector<geometry::LineSegment2d>] [default={}]: Lines in the range/height plane which define the world. Layout:
[ [ [-100.0, 0.0], [100.0, 0.0] ], [ [0.0, 0.0], [20.0, 2.0] ] ]
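A config sketch for a simulated lidar in the world given by the example segments above; the node name mock_scan is an assumption:
"config": {
  "mock_scan": {
    "RangeScanGenerator": {
      "num_slices": 16,
      "max_range": 50.0,
      "height": 1.0,
      "segments": [ [ [-100.0, 0.0], [100.0, 0.0] ], [ [0.0, 0.0], [20.0, 2.0] ] ],
      "tick_period": "20Hz"
    }
  }
}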
isaac.message_generators.RigidBody3GroupGenerator¶
Description
Publishes messages with a single body using configured values
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- bodies [RigidBody3GroupProto]: Output group with a single body
Parameters
- body_name [string] [default="dummy_body"]: Name of the body
- reference_frame [string] [default="world"]: Reference frame for the body
- pose [Pose3d] [default=Pose3d::Identity()]: Pose of the body with respect to the reference frame
- linear_velocity [Vector3d] [default=Vector3d::Zero()]: Linear velocity of the body
- angular_velocity [Vector3d] [default=Vector3d::Zero()]: Angular velocity of the body
- linear_acceleration [Vector3d] [default=Vector3d::Zero()]: Linear acceleration of the body
- angular_acceleration [Vector3d] [default=Vector3d::Zero()]: Angular acceleration of the body
isaac.message_generators.TensorGenerator¶
Description
TensorGenerator creates lists of tensors from nothing.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- sample [TensorProto]: Randomly produced list of tensors with the specified dimensions
Parameters
- dimensions [Vector3i] [default=Vector3i(3, 640, 480)]: Dimensions of the generated rank 3 tensor
- element_type [TensorGeneratorElementType] [default=TensorGeneratorElementType::kFloat32]: The element type for the tensor
isaac.message_generators.TrajectoryListGenerator¶
Description
TrajectoryListGenerator publishes a trajectory with made-up data. The fake trajectory is a vertical helix in 3D, centered on the reference frame origin and spinning around the Z axis.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- trajectories [Vector3TrajectoryListProto]: The output channel to send all generated trajectories.
Parameters
- frame [string] [default="world"]: Reference frame for the generated trajectories.
- position_count [int] [default=60]: Number of positions in the generated trajectory.
- helix_radius [double] [default=5.0]: The radius of the vertical helix created as the made up trajectory.
- position_delta_angle [double] [default=0.1]: The delta angle between consecutive positions in the generated trajectory.
isaac.ml.BoundingBoxPadding¶
Description
This codelet takes detections as input and pads each bounding box by a random number of pixels drawn from a uniform distribution within a range specified by the input parameters.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_detections [Detections2Proto]: Input list of bounding boxes that need to be padded
Outgoing messages
- output_detections [Detections2Proto]: Detections with padded bounding boxes
Parameters
- bbox_padding_range [Vector2i] [default=]: If set, the bounding box is padded with pixels and the number of pixels padded on each side of the bbox is given by a random number in the range specified by the parameter with uniform distribution. If set to [3, 7] - random padding on each side with pixel number between 3 and 7. If set to [3, 3] - bounding box is padded by 3 pixels on each of the 4 sides.
- image_dimensions [Vector2i] [default=]: Dimensions of the image corresponding to the bounding box. This parameter is needed to ensure that the bounding boxes after padding are within the image dimensions.
isaac.ml.ColorCameraEncoderCpu¶
Description
Encodes the color image from an ImageProto into a tensor. The codelet can downsample the image, normalize pixel values, and transform to planar storage order.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- rgb_image [ImageProto]: Input RGB color image
Outgoing messages
- tensor [TensorProto]: A rank 3 tensor with image data normalized and transformed according to parameters.
Parameters
- rows [int] [default=960]: The image is resized before it is encoded. Currently, only downsampling is supported for this. Number of pixels in the height dimension of the downsampled image.
- cols [int] [default=540]: The image is resized before it is encoded. Currently, only downsampling is supported for this. Number of pixels in the width dimension of the downsampled image.
- pixel_normalization_mode [ImageToTensorNormalization] [default=ImageToTensorNormalization::kNone]: Type of Normalization to be performed.
- tensor_index_order [TensorTransposeOp] [default=TensorTransposeOp::k012]: The indexing order, default is {row, column, channel}
isaac.ml.ColorCameraEncoderCuda¶
Description
Encodes the color image from an ImageProto into a tensor. The codelet can downsample the image, normalize pixel values, and transform to planar storage order. Uses GPU-accelerated functions.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- rgb_image [ImageProto]: Input RGB color image
Outgoing messages
- tensor [TensorProto]: A rank 3 tensor with image data normalized and transformed according to parameters.
Parameters
- rows [int] [default=960]: The image is resized before it is encoded. Currently, only downsampling is supported for this. Number of pixels in the height dimension of the downsampled image.
- cols [int] [default=540]: The image is resized before it is encoded. Currently, only downsampling is supported for this. Number of pixels in the width dimension of the downsampled image.
- keep_aspect_ratio [bool] [default=true]: The aspect ratio of the image is preserved during resizing; the ROI is centered and padded.
- pixel_normalization_mode [ImageToTensorNormalization] [default=ImageToTensorNormalization::kUnit]: Type of Normalization to be performed. Todo: Add additional normalization modes besides unit for cuda
- tensor_index_order [TensorTransposeOp] [default=TensorTransposeOp::k012]: The indexing order, default is {row, column, channel}
isaac.ml.ConfusionMatrixAggregator¶
Description
Accumulates evaluation metrics of an object detection algorithm across multiple images. Sums the confusion matrices of two ObjectDetectionMetricsProto messages. Each time a new metrics message is received, this codelet updates the accumulated metrics and publishes them in an outgoing ObjectDetectionMetricsProto message.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- sample_metrics [ConfusionMatrixProto]: incoming metrics message
Outgoing messages
- accumulated_metrics [ConfusionMatrixProto]: outgoing accumulated metrics message
Parameters
- confusion_matrix_slice_index [int] [default=0]: Index to specify which slice of the confusion matrix we want to visualize. The slicing is done along the third dimension. Hence each slice of the matrix represents a 2D tensor which is a single confusion matrix for a particular intersection over union threshold.
isaac.ml.Detection3Encoder¶
Description
Encodes 3D detections into a tensor
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detection3 [Detections3Proto]: Input detection proto.
Outgoing messages
- tensor [TensorProto]: A tensor of dimensions (N, 8) where N is the number of rigid bodies. Channels are as follows: columns 0-2 are translations in the order (px, py, pz), columns 3-6 are orientations as quaternions in the order (qw, qx, qy, qz), and column 7 is the class id of the rigid body. Example:
{{px_1, py_1, pz_1, qw_1, qx_1, qy_1, qz_1, id_1}, ..., {px_N, py_N, pz_N, qw_N, qx_N, qy_N, qz_N, id_N}}
Parameters
- class_names [std::vector<std::string>] [default={}]: List of class names to detect as string
isaac.ml.DetectionComparer¶
Description
Evaluates the predicted object detection output as compared to the ground truth. Academic standards for evaluating object detection models involve computing the confusion matrix parameters of the predicted detections over a range of intersection over union thresholds. The intersection over union between two bounding boxes is calculated as (area of intersection between the two bounding boxes) / (area of their union); it thus defines the degree of overlap between two bounding box rectangles. If the two bounding boxes overlap perfectly, their intersection over union score is 1.0.
This codelet computes a confusion matrix for each sample, of dimensions (num_classes + 1) * (num_classes + 1) * num_iou_thresholds. The element at (i, j, k) represents the number of detections in that sample which had ground truth class label i and were predicted as class j using intersection over union threshold k. The last element of the 0th and 1st dimensions represents the background class (bg). An example confusion matrix for classes A and B for a single intersection over union threshold would be:
        A   B   bg
A     ( 2   0   0 )
B     ( 0   3   0 )
bg    ( 0   0   0 )
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- ground_truth_detection [Detections2Proto]: Ground truth object detection input
- predicted_detection [Detections2Proto]: Predicted detection input
Outgoing messages
- metrics [ConfusionMatrixProto]: Output containing the evaluated metrics for the detections in a single sample.
The output message contains the following information:
- Number of image samples over which the metrics were computed.
- List of intersection over union thresholds over which the metrics were computed.
- A 3D tensor representing the confusion matrices calculated over these intersection over union thresholds. Each two-dimensional slice of the tensor along the third axis represents the confusion matrix computed for a particular intersection over union threshold.
Parameters
- intersection_over_union_thresholds [std::vector<double>] [default=std::vector<double>({0.5, 0.8, 0.95})]: List of intersection over union thresholds over which the metrics are computed
- class_names [std::vector<std::string>] [default={}]: The allowed class names for the ground truth and predicted detections. The confusion matrix is constructed by assigning an index to each of these classes and one for the background class. If a sample contains any class other than the ones specified here, it gets dropped.
isaac.ml.DetectionEncoder¶
Description
Encodes detection for input into the object detection neural network.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detection [Detections2Proto]: Input detection proto.
Outgoing messages
- tensor [TensorProto]: Detection encoded as a (N, 5) tensor where N is the number of bounding boxes. Channels are: (bb_min_x, bb_min_y, bb_max_x, bb_max_y, class_id).
Parameters
- class_names [json] [default={}]: The class names of our detection objects.
- area_threshold [double] [default=10.0]: The minimum area of bounding boxes
isaac.ml.Detections3Comparer¶
Description
Helper codelet that computes the rotation and translation error between two poses and outputs them as JSON.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- reference_poses [Detections3Proto]: Reference pose
- predicted_poses [Detections3Proto]: Predicted pose
Outgoing messages
- statistics [JsonProto]: Outputs statistics about the reference_poses and the predicted_poses.
Parameters
- symmetry_rotations [std::vector<SO3d>] [default={}]: List of rotation symmetries of the object. For example, setting the parameter to [{ "axis": [0, 0, 1], "angle_degrees": 180 }] indicates the object is symmetric about the z axis and the symmetry angle is 180 degrees. TODO: Extend to multiple object classes
isaac.ml.FilterDetections¶
Description
Filters detections by matching lists of included or excluded label names, and then outputs the top N bounding boxes with the largest area in common with the region of interest for those labels.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_detections [Detections2Proto]: A list of detections which may have different labels.
Outgoing messages
- output_detections [Detections2Proto]: A subset of the input detections, filtered by matching the included and excluded labels.
Parameters
- allowlist_labels [std::vector<std::string>] [default=]: The labels for which to include only detections by string match. Includes all if not set. NOTE: Set either allowlist_labels or denylist_labels, not both.
- denylist_labels [std::vector<std::string>] [default=]: The labels for which to exclude some detections by string match. Excludes none if not set. NOTE: Set either allowlist_labels or denylist_labels, not both.
- max_detections [int] [default=1]: Maximum number of detections to output. If number of incoming detections exceeds this number, smaller detections are filtered out.
- image_cols [int] [default=1280]: Width of the image corresponding to the detections. This is used to compute the metric used to filter the top N detections.
- focus_factor [double] [default=0.5]: Fraction of the image width. This parameter is used to filter the top N detections with maximum area in the region of interest.
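A config sketch that keeps only the two largest "person" detections; the node name and the label are illustrative:
"config": {
  "filter": {
    "FilterDetections": {
      "allowlist_labels": ["person"],
      "max_detections": 2,
      "image_cols": 1280
    }
  }
}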
isaac.ml.GenerateKittiDataset¶
Description
This codelet generates a dataset in KITTI format by saving images and detections to disk. It creates a set of corresponding images and labels for training, and a set of images for testing. The KITTI format label file for each image contains one line per object in the image, followed by a set of values. This codelet populates only the bounding box values for each image, leaving the rest as 0.0. For more info on this “lite” KITTI format, see: https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#kitti_file
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- image [ImageProto]: Image input (must be a three-channel color image of type image3ub)
- detections [Detections2Proto]: Detections associated with the image input
- Outgoing messages
- (none)
Parameters
- num_training_samples [int] [default=1000]: The total number of training samples to generate
- num_testing_samples [int] [default=100]: The total number of testing samples to generate
- path_to_dataset [string] [default=]: Path to the root of the KITTI dataset. See https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#kitti_file for file structure and organization.
isaac.ml.HeatmapDecoder¶
Description
Converts a tensor representing heatmap values to a HeatmapProto type. This codelet has the inverse functionality of the HeatmapEncoder codelet. Please refer to HeatmapEncoder.hpp for details on the message types.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- tensor [TensorProto]: Input tensor containing heatmap of probabilities
Outgoing messages
- heatmap [HeatmapProto]: Output heatmap proto
Parameters
- grid_cell_size [double] [default=2.0]: Cell size (in metres) of every pixel in heatmap
- map_frame [string] [default="world"]: The pose map frame for the heatmap
isaac.ml.HeatmapEncoder¶
Description
Encodes a heatmap as a tensor.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- heatmap_proto [HeatmapProto]: Input heatmap proto containing heatmap image of probabilities
Outgoing messages
- heatmap_tensor [TensorProto]: Heatmap encoded as a tensor
- Parameters
- (none)
isaac.ml.ImageDetectionExtraction¶
Description
Encodes the color or segmentation image from an ImageProto into a tensor. The codelet crops the image according to input detections, downsamples the cropped image, normalizes pixel values, and transforms to planar storage order.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detections [Detections2Proto]: Input list of bounding boxes to crop to. Labels and confidence are not used.
- image [ImageProto]: Input RGB color image or segmentation image that needs to be converted to a tensor
Outgoing messages
- tensor [TensorProto]: Cropped and resized output as a batch of image tensors with dimensions (Nd, rows, cols, Nc) or (Nd, Nc, rows, cols), depending on the tensor_index_order parameter, if Nc = 3; or (Nd, rows, cols) if the input image proto contains a single-channel image (Nc = 1). Nd is the number of input detections, rows and cols are the rows and columns of the output image, and Nc is the number of channels, which is either 3 for RGB or 1 for a segmentation image.
Parameters
- downsample_size [Vector2i] [default=]: Target dimensions (rows, cols) for downsample after crop.
- pixel_normalization_mode [ImageToTensorNormalization] [default=ImageToTensorNormalization::kNone]: Type of Normalization to be performed. Valid modes are {kNone, kUnit, kPositiveNegative, kHalfAndHalf}
- tensor_index_order [TensorTransposeOp] [default=TensorTransposeOp::k012]: The indexing order of the output tensor, default is {row, column, channel}
isaac.ml.LabelToBoundingBox¶
Description
Computes bounding boxes from given class and instance label images. All clusters of pixels with the same instance and class label are grouped together and an image AABB is computed. All bounding boxes are then published together.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- class_segmentation [ImageProto]: Input image with class labels.
- instance_segmentation [ImageProto]: Input image with instance labels.
- class_labels [LabelProto]: Label mapping of class indices to class names
Outgoing messages
- detections [Detections2Proto]: Computed bounding boxes - one AABB for every occurring instance/label combination.
Parameters
- resolution [int] [default=1]: The target resolution when computing bounding boxes. A value of 1 means bounding boxes are pixel-accurate. A value of 3 would mean bounding boxes are accurate up to 3 pixels.
- min_bbox_size [int] [default=1]: Minimum size in pixels across the two dimensions of the rectangle to be considered as non-zero size bounding box. A value of 1 means the bounding box rectangle length and breadth must be at least one pixel.
isaac.ml.ResizeDetections¶
Description
This codelet performs resizing of bounding boxes. Bounding boxes are defined using coordinates relative to the image of the detection, and if images are resized then their associated bounding boxes must also be resized accordingly.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detections [Detections2Proto]: Input detections associated with an image of dimensions specified by the input_image_dimensions ISAAC_PARAM
Outgoing messages
- resized_detections [Detections2Proto]: Output detections associated with an image of dimensions specified by the output_image_dimensions ISAAC_PARAM
Parameters
- input_image_dimensions [Vector2d] [default=]: Resolution of the image (rows, cols) that the input detections were computed for.
- output_image_dimensions [Vector2d] [default=]: Resolution of the image (rows, cols) that the output detections should be transformed to.
isaac.ml.RigidbodyToDetections3¶
Description
This codelet reads in a list of 3D rigid bodies with poses in the Isaac SDK coordinate frame, converts the poses to the reference frame of one of the input rigid bodies if needed, and publishes the list of 3D rigid body poses in the reference frame as a Detections3Proto.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- rigid_bodies [RigidBody3GroupProto]: Input list of 3D rigid body poses in Isaac SDK coordinate frame
Outgoing messages
- detections [Detections3Proto]: Output list of 3D rigid body poses with respect to the desired rigid body coordinate frame from the input list. The index of this reference rigid body in the input list is given by the input parameter ref_frame_id.
Parameters
- ref_frame_id [int] [default=0]: Index of the rigid body in input list that is used as reference coordinate frame for publishing the poses of all the rigid bodies in the input list. If ref_frame_id < 0, object poses are published with respect to Isaac SDK coordinate frame.
isaac.ml.SampleAccumulator¶
Description
Collects training samples and makes them available for machine learning.
Each sample consists of a list of tensors. Tensors must currently be based on 32-bit floats. This codelet does not use macros to define input channels. Instead, input channels are created based on the parameter channel_names.
Note: SampleAccumulator processes one sample at a time which might lead to message loss with many channels at a high data rate.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- sample_buffer_size [int] [default=256]: Number of training samples to keep in the buffer
- randomize_samples [bool] [default=true]: Randomize the order of samples in the buffer when true
- channel_names [std::vector<std::string>] [default={“samples”}]: Names of input channels. A sample will contain one tensor for each input channel in the given order.
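A config sketch buffering paired image and label tensors; the node name accumulator and the channel names are illustrative, and each listed channel must also be wired up as an edge in the application graph:
"config": {
  "accumulator": {
    "SampleAccumulator": {
      "sample_buffer_size": 512,
      "randomize_samples": true,
      "channel_names": ["images", "labels"]
    }
  }
}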
isaac.ml.SegmentationComparer¶
Description
Evaluates the predicted segmentation output as compared to the ground truth. This codelet computes a confusion matrix for each sample, of dimensions (num_classes + 1) * (num_classes + 1) * num_thresholds. The element at (i, j, 0) represents the number of pixels in that sample which had ground truth class label i and were predicted as class j. The last element of the 0th and 1st dimensions represents the pixels that do not belong to any of the classes in consideration (N/A). Typically the pixels which belong to this category are:
1. The ones which have an index higher than the number of classes in consideration, as determined by the number_of_classes parameter below.
2. The ones which were assigned the index of the unknown class in TensorArgMax.
An example confusion matrix for classes A and B would be:
        A   B   N/A
A     ( 2   0   0 )
B     ( 0   3   0 )
N/A   ( 0   0   0 )
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- ground_truth [TensorProto]: Ground truth object segmentation input
- prediction [TensorProto]: Predicted segmentation input
Outgoing messages
- metrics [ConfusionMatrixProto]: Output containing the evaluated metrics for the segmentations in a single sample.
Parameters
- argmax_threshold [double] [default=0.5]: The discretization threshold that was used to convert a 3 dimensional tensor to a 2 dimensional tensor in TensorArgMax. This parameter is repeated here so as to fill the “thresholds” parameter in the ConfusionMatrixProto. During evaluation, it’s important to know the confidence threshold that was used to decide if a prediction was valid or not, since the confusion matrix produced could vary depending on this threshold. Hence, although the inference results provided to this comparer codelet have already been filtered based on a threshold, we repeat the parameter for information so that it can be propagated downstream.
- number_of_classes [int] [default=0]: Number of classes expected from ground truth and prediction. This information is needed to build a confusion matrix of the appropriate size and to determine the class indices to compare. For example, if this value is set to 2, we’d consider classes 0 and 1 while comparing the ground truth and predicted segmentations.
isaac.ml.SegmentationDecoder¶
Description
Converts a tensor to a segmentation prediction.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- tensors [TensorProto]: The input tensor contains semantic segmentation label prediction where each pixel has a probability distribution over all classes. Dimensions: (rows, cols, number of classes)
Outgoing messages
- segmentation_prediction [SegmentationPredictionProto]: Output segmentation prediction proto which contains the class information
Parameters
- class_names [json] [default={}]: Names of the classes in an array. Each class is represented by a string. The number of classes must match the number of classes in the tensor input.
isaac.ml.SegmentationEncoder¶
Description
Encodes segmentation for input into the object segmentation neural network.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- class_segmentation [ImageProto]: Input segmentation image.
- labels [LabelProto]: Label mapping for the input segmentation image
Outgoing messages
- tensor [TensorProto]: Output tensor encoding the segmentation image. Dimensions are (rows, cols, channels) where the number of channels is 1 if the number of classes is 2 (as in binary segmentation), or (n + 1) if the number of classes is n (as in multiclass segmentation).
Parameters
- rows [int] [default=256]: Height of the downsampled segmentation image
- cols [int] [default=512]: Width of the downsampled segmentation image
- offset [int] [default=1]: Offset by which the actual pixel value differs from the label index. The segmentation image comes with a per-pixel integer labelling which denotes the object it belongs to. This integer is in turn mapped to a string label name in the format <object_name>:<label_index>. In some cases (such as NavSim), the label index could be equal to the pixel value, and in other cases (such as Isaac Sim) they could differ by a fixed offset value.
- class_label_indices [std::vector<int>] [default=]: The pixel values that are to be encoded as valid classes, in case the labels are not provided. These are only used in the absence of the string labels to be encoded, and if we specify the input mode as "NoLabelsAvailable"
- input_mode [InputMode] [default=InputMode::kLabelsAvailable]: Expected data input mode. When the string labels are available, the mode would be “LabelsAvailable” When there are no string labels available, the mode would be “NoLabelsAvailable”. In this case, the encoder looks for the pixel values which are to be encoded as valid classes.
- output_type [OutputMode] [default=OutputMode::kDistribution]: Parameter defining the format of the output tensor. If the mode is “Index”, we publish a 3D integer tensor, where each element at position (row, col, 0) represents the index of the class that the corresponding pixel at position (row, col) belongs to. If the mode is “Distribution”, we publish a tensor representing the probability distribution over the classes.
- class_label_names [std::vector<std::string>] [default={}]: A list of string labels representing the classes which need to be encoded. Typically, the input proto message contains an image where each pixel has a numerical value that represents its class, and the mapping of these numerical values to their string labels. We might want to encode a subset of these classes in the output tensor. This subset is determined by the class_label_names parameter. Pixels which belong to classes other than the ones specified in this parameter are counted as "everything else".
isaac.ml.Teleportation¶
Description
Teleportation is a class that generates random poses and sends them to an actor group codelet. Output pose is generated in 4 steps:
- relative_frame: This optional pose can be supplied as an input message. It is useful when chaining Teleportation codelets.
- base_pose: Pose is picked in one of the two modes:
- box mode: Uniformly randomly picks each pose value from the given ranges, i.e., the yaw angle is between min_yaw and max_yaw.
- spline mode: Uniformly randomly picks a pose that is tangent to the given spline.
- noise_pose: Gaussian noise generated using given mean and standard deviation values.
- offset_pose: Pose applied to transform frames. Supplied by user as a parameter.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- relative_frame [Pose3dProto]: Proto used to receive a reference frame (pose)
Outgoing messages
- rigid_command [RigidBody3GroupProto]: Proto used to publish rigid body pose to the sim bridge
- relative_frame_cmd [Pose3dProto]: Proto used to publish rigid body pose to another teleportation codelet as a reference frame
Parameters
- enable_on_relative_frame [bool] [default=false]: Flag to tick on relative frame message
- name [string] [default=""]: Name of the actor to teleport
- min_scale [double] [default=0.0]: Minimum multiplicative scale factor of corresponding objects in simulation
- max_scale [double] [default=1.0]: Maximum multiplicative scale factor of corresponding objects in simulation
- enable_scale [bool] [default=false]: Flag to enable scale
- base_mode [BaseMode] [default=BaseMode::kBox]: Parameter for step 1. Specifies how the base pose will be generated. Please see codelet summary for a list of modes.
- min [Vector3d] [default=Vector3d::Zero()]: Parameter for “box” mode of step 1. Minimum translation in X, Y, Z coordinates
- max [Vector3d] [default=Vector3d(1.0, 1.0, 1.0)]: Parameter for “box” mode of step 1. Maximum translation in X, Y, Z coordinates
- enable_translation_x [bool] [default=true]: Parameter for “box” mode of step 1. Flag to enable translation (X)
- enable_translation_y [bool] [default=true]: Parameter for “box” mode of step 1. Flag to enable translation (Y)
- enable_translation_z [bool] [default=true]: Parameter for “box” mode of step 1. Flag to enable translation (Z)
- min_roll [double] [default=0.0]: Parameter for “box” mode of step 1. Minimum roll change after a teleportation
- max_roll [double] [default=TwoPi<double>]: Parameter for “box” mode of step 1. Maximum roll change after a teleportation
- enable_roll [bool] [default=false]: Parameter for “box” mode of step 1. Flag to enable rotation (roll)
- min_pitch [double] [default=0.0]: Parameter for “box” mode of step 1. Minimum pitch change after a teleportation
- max_pitch [double] [default=TwoPi<double>]: Parameter for “box” mode of step 1. Maximum pitch change after a teleportation
- enable_pitch [bool] [default=false]: Parameter for “box” mode of step 1. Flag to enable rotation (pitch)
- min_yaw [double] [default=0.0]: Parameter for “box” mode of step 1. Minimum yaw change after a teleportation
- max_yaw [double] [default=TwoPi<double>]: Parameter for “box” mode of step 1. Maximum yaw change after a teleportation
- enable_yaw [bool] [default=false]: Parameter for “box” mode of step 1. Flag to enable rotation (yaw)
- spline_distance [double] [default=0.02]: Parameter for “spline” mode of step 1. We will travel for this fraction of the spline distance before uniformly randomly sampling a new point on the spline again.
- spline_speed [double] [default=0.005]: Parameter for “spline” mode of step 1. Speed of travel. Unit is fraction of spline length per second. Negative speed corresponds to driving backwards.
- spline_flip_probability [double] [default=0.5]: Parameter for “spline” mode of step 1. With this probability, the direction of the tangent will be flipped.
- translation_standard_deviation [Vector3d] [default=Vector3d::Zero()]: Parameter for step 2. A noise for the translation with this standard deviation will be applied.
- roll_pitch_yaw_standard_deviation [Vector3d] [default=Vector3d::Zero()]: Parameter for step 2. A noise for the angles with this standard deviation will be applied.
- offset_pose [Pose3d] [default=Pose3d::Identity()]: Parameter for step 3. Offset pose to be applied to the combined pose.
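A config sketch that teleports an actor inside a 2 m x 2 m box with random yaw; the node name and actor name are illustrative assumptions:
"config": {
  "teleport": {
    "Teleportation": {
      "name": "target_actor",
      "min": [0.0, 0.0, 0.0],
      "max": [2.0, 2.0, 0.0],
      "enable_yaw": true,
      "min_yaw": 0.0,
      "max_yaw": 6.283185307179586,
      "tick_period": "1Hz"
    }
  }
}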
isaac.ml.TensorArgMax¶
Description
Converts a rank 3 tensor (rows, columns, channels) to a rank 2 tensor (rows, columns) based on a user-defined threshold. Conversion involves determining the channel index with the maximum value at each (row, col) location and checking if this value is greater than the user-defined threshold. If it is greater than the threshold, the element at the corresponding (row, col) location in the 2 dimensional tensor is assigned the argmax index. Else it is assigned the unknown class index.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input [TensorProto]: Input rank-3 tensor with dimensions (rows, columns, channels).
Outgoing messages
- argmax [TensorProto]: Output rank-2 tensor with dimensions (rows, columns) which stores the index of the max channel.
Parameters
- threshold [double] [default=0.5]: Threshold which defines if the argmax channel index for a pixel will be assigned to the 2 dimensional tensor. If the value at the argmax index is less than this threshold, the corresponding element in the discretized tensor is assigned the unknown class index.
- non_max_index [int] [default=-1]: Value to be assigned for the unknown class.
isaac.ml.TensorChannelSum¶
Description
Takes a 3 dimensional tensor as input and processes it to create an image with 2 channels. The value of each pixel in the target image is computed by summing up the user-defined channels. An element at the position (row, col, 0) in the output image is computed by summing up the elements in the tensor channel indices specified in the parameter channel_zero_class_indices. Similarly, the element at the position (row, col, 1) in the output image is computed by adding the elements in the tensor channel indices specified in the parameter channel_one_class_indices.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- tensor [TensorProto]: Input tensor having dimensions (rows, columns, channels)
Outgoing messages
- image [ImageProto]: Output image having dimensions (rows, columns, 2)
Parameters
- channel_zero_class_indices [std::vector<int>] [default=]: Channel indices of the input tensor which are to be added to compute the pixel values in channel 0 of the output image.
- channel_one_class_indices [std::vector<int>] [default=]: Channel indices of the input tensor which are to be added to compute the pixel values in channel 1 of the output image.
isaac.ml.TensorRTInference¶
Description
This codelet loads a frozen neural network model into memory, generates an optimized TensorRT engine, evaluates the model using tensors of type TensorProto received on RX channels, and publishes the network’s output tensors on TX channels of type TensorProto.
Please refer to the Tensorflow inference documentation for an explanation of how to set up input and output channels.
Note: TensorRT always uses planar storage order for images, and not interleaved storage.
Note: The batch dimension is optional, i.e. both (1, 3, 480, 640) and (3, 480, 640) are allowed.
See the Machine Learning Workflow section of the Development Guide for more information.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- input_tensor_info [json] [default={}]: Input Tensor Information in JSON, example:
[{"operation_name":"input", "dims":[1,3,480,640]}]
- output_tensor_info [json] [default={}]: Output Tensor Information in JSON, example:
[{ "operation_name": "output", "dims": [1,1000] }]
See the Machine Learning Workflow section of the Development Guide for more information.
- model_file_path [string] [default=]: Path to the frozen model, in .uff, .onnx, or .etlt format. See also: engine_file_path. NOTE: UFF deprecation warning: NVIDIA is deprecating the UFF parser for TensorFlow from TensorRT 7. It is tested and functional, but NVIDIA plans to remove support in a subsequent major release. Please plan to migrate your workflow to ONNX.
- engine_file_path [string] [default=]: Path to the CUDA engine which is used for inference (input or location for the cached engine)
- etlt_password [string] [default=]: Password used to decrypt the model if it is of ETLT format (optional).
- force_engine_update [bool] [default=false]: Force update of the CUDA engine, even if input or cached .plan file is present. Debug feature.
- inference_mode [InferenceMode] [default=InferenceMode::kFloat16]: Parameter to define the inference mode. The default value is Float16
- max_batch_size [int] [default=]: Maximum batch size. The default value can be inferred from the input_tensor_info parameter. Note: if the batch size in input_tensor_info is variable (-1), this is a required parameter. NOTE: Variable batch size is currently not supported for ONNX. NVIDIA plans support to be available in a subsequent release via the dynamic shape API.
- max_workspace_size [int64_t] [default=67108864]: Maximum workspace size. The default value is 64MB
- plugins_lib_namespace [string] [default=]: TensorRT plugins library namespace, optional, set to enable plugins. Note, an empty string is a valid value for this parameter and it specifies the default TensorRT namespace.
- device_type [DeviceType] [default=DeviceType::kGPU]: The default device that the network will execute on: GPU or DLA. NOTE: DLA acceleration is available on Jetson Xavier, NX, and x64 Volta/Turing+ GPUs. This option is not yet officially supported in Isaac SDK. For Jetson builds it requires switching the linkage from "-nodla-tar-xz" to "-dla-tar-xz" in the third_party/packages.bzl file.
- dla_core [int64_t] [default=0]: The DLA core to execute on. Where dlaCore is a value between 0 and getNbDLACores() - 1. (ignored, when the device_type parameter is set to GPU)
- allow_gpu_fallback [bool] [default=true]: Allow fallback to GPU, if this layer/network can’t be executed on DLA. (ignored, when the device_type parameter is set to GPU)
- verbose [bool] [default=false]: Enable verbose log output, this option enables logging of DNN optimization progress, it is disabled by default, as the output of TensorRT optimization results in too many log messages even for LOG_LEVEL_DEBUG level.
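A config sketch running an ONNX classifier, following the tensor-info examples above; the node name inference and the file paths are placeholders:
"config": {
  "inference": {
    "TensorRTInference": {
      "model_file_path": "/tmp/model.onnx",
      "engine_file_path": "/tmp/model.plan",
      "input_tensor_info": [ { "operation_name": "input", "dims": [1, 3, 480, 640] } ],
      "output_tensor_info": [ { "operation_name": "output", "dims": [1, 1000] } ]
    }
  }
}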
isaac.ml.TensorReshape¶
Description
Reshapes a tensor to new dimensions. The number of elements of the output tensor must match the number of elements of the input tensor.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_tensors [TensorProto]: This tensor will be reshaped to the desired dimensions.
Outgoing messages
- output_tensors [TensorProto]: The tensor based on the input data but with the desired dimensions
Parameters
- output_tensor_dimensions [std::vector<int>] [default={}]: Tensor shape information for each tensor in the list. It must be an array of arrays, where the number of arrays must equal the number of tensors in input_tensors. If the shape is set to -1 in any dimension, that dimension is automatically computed to match the total number of input tensor elements. For example, if input_tensors has dimensions [100, 20] and the parameter is set to [10, -1, 20], then the reshaped output tensor has dimensions [10, 10, 20].
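A config sketch for the reshape example above, where the -1 dimension is inferred; the node name reshape is an assumption:
"config": {
  "reshape": {
    "TensorReshape": {
      "output_tensor_dimensions": [10, -1, 20]
    }
  }
}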
isaac.ml.TensorflowInference¶
Description
A codelet to run inference for a Tensorflow model.
The codelet loads the model specified with the parameters model_file_path and config_file_path in the start function. The expected names and shapes of input and output channels are defined via the parameters input_tensor_info and output_tensor_info.
This codelet does not use macros to define input and output channels. Instead, channels are automatically set up based on the information in the input_tensor_info and output_tensor_info parameters. By default, the ops_name is used as the channel name. However, sometimes this name is too long or not a valid identifier. In that case the channel name can be specified via the channel key. Valid channel names must only contain alphanumeric characters or underscores. For example, consider the following configuration for input_tensor_info:
[ { "ops_name": "layer4/misc/baseline", "channel": "misc_baseline", "index": 1, "dims": [1, 20, 30, 2] }, { "ops_name": "image", "index": 0, "dims": [1, 276, 276, 3] } ]This will generate two input channels with names misc_baseline and image. These names can be used directly in graph JSON files to specify edges. For example:
"edges": [ { "source": "camera/Video4Linux/color", "target": "object_detection/TensorflowInference/image" }, ...Warning: Currently only 32-bit floating point tensors are accepted as input and output will always be 32-bit floating point tensors.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- input_tensor_info [json] [default={}]: Information about the name and shape of inputs to the Tensorflow model in JSON format. This information is used to set up input channels. Example:
[ { "ops_name": "input", "index": 0, "dims": [1, 224, 224, 3] } ]
- output_tensor_info [json] [default={}]: Information about the name and shape of outputs of the Tensorflow model in JSON format. This information is used to set up output channels.
- model_file_path [string] [default=]: Model data with contents from the specified file
- config_file_path [string] [default=]: Config data with contents from the specified file
isaac.ml.TorchInference¶
Description
This codelet loads a trained Torch model and runs inference with the model.
This codelet does not use macros to define input and output channels. Instead, channels are automatically set up based on the rx_channel_names and tx_channel_names parameters. Valid channel names must only contain alphanumeric characters or underscores.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- model_file_path [string] [default=]: Path to the Torch model file
- message_time_to_live [double] [default=]: Messages waiting in the queue for more than this duration will be skipped.
- rx_channel_names [std::vector<std::string>] [default={“input”}]: Name of input channels. By default a single channel named input
- tx_channel_names [std::vector<std::string>] [default={“output”}]: Name of output channels. By default a single channel named output
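A config sketch loading a Torch model with the default single input/output channels; the node name torch and the model path are placeholders:
"config": {
  "torch": {
    "TorchInference": {
      "model_file_path": "/tmp/model.pt",
      "rx_channel_names": ["input"],
      "tx_channel_names": ["output"]
    }
  }
}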
isaac.ml.WaitUntilDetection¶
Description
Accepts the detections for the current camera frame as input and checks for a user-defined label. If the label is found, it publishes the detections for all the labels that we are looking for in that particular scene. It also includes a parameter which lets the user define how many times a label must be detected before it is considered a true positive.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_detections [Detections2Proto]: Detections from the object detection inference components. The codelet parses this list to look for the presence of one or more user-defined class labels.
Outgoing messages
- output_detections [Detections2Proto]: Output detections containing the bounding boxes in the scene for the class/classes in question. The output is published when at least one of the classes that we are interested in is detected the required number of times.
Parameters
- labels_to_match [std::vector<std::string>] [default=]: Names of the labels to look for in the predicted detections.
- required_detection_number [int] [default=1]: The number of times an object needs to be detected before we consider it as a true positive
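As an illustration, the following sketch requires a label to be seen three times before the detections are forwarded. The node name detection_checker and the class labels are hypothetical placeholders:
"config": {
  "detection_checker": {
    "WaitUntilDetection": {
      "labels_to_match": ["person", "forklift"],
      "required_detection_number": 3
    }
  }
}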
isaac.monitor.MaxValueGradeFusion¶
Description
Selects the largest element from a set of evaluation results and returns it.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
- Parameters
- (none)
isaac.monitor.Monitor¶
Description
Monitors a set of criteria fed in by individual evaluator components running outside of this monitor. All evaluation grades are combined by a grade fusion codelet running in the same node as this monitor, which decides how to fuse the evaluation grades. It returns an overall fused grade value, allowing the Monitor instance to derive action from it. The Monitor reports a failure when the fused grade exceeds the Monitor’s fused_grade_threshold parameter.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- evaluator_max_age [double] [default=2.0]: Maximum time, in seconds, for which the absence of messages is tolerated from any evaluator that was present at some point.
- fused_grade_threshold [double] [default=1.0]: Threshold deciding when the monitor reports a failure. If the fused grade is above this threshold, a failure is reported.
- settle_time [double] [default=3.0]: Amount of time (in seconds) to give components for settling after an initial ramp-up phase. Evaluator grades are not judged during this time period after starting the Monitor.
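A configuration sketch restating the default values listed above; the node name monitor is a hypothetical placeholder:
"config": {
  "monitor": {
    "Monitor": {
      "fused_grade_threshold": 1.0,
      "evaluator_max_age": 2.0,
      "settle_time": 3.0
    }
  }
}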
isaac.object_pose_estimation.BoundingBoxEncoder¶
Description
Encodes the detections and scales the bounding box parameters such that the pixel coordinates along x are in the range [-1, 1]. This is the pre-processing step before passing the bounding box parameters into TensorRT inference or training scripts for the pose estimation CNN model.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detection [Detections2Proto]: List of input bounding box detections that are encoded as tensors
Outgoing messages
- tensor [TensorProto]: Detection encoded as a (N, 4) tensor where N is the number of bounding boxes. Channels are: (bb_min_x, bb_min_y, bb_max_x, bb_max_y).
Parameters
- image_dimensions [Vector2i] [default=Vector2i(720, 1280)]: Image dimensions (rows, cols) that the detections correspond to. Bounding box parameters are scaled such that the row pixel coordinates are in the range [-1, 1].
isaac.object_pose_estimation.CodebookLookup¶
Description
A codelet representing a mapping from a code vector to a feature vector. For a given feature vector the n closest code vectors can be looked up. Both code and feature vectors are vectors of floating point numbers. Currently the only supported distance function is the dot product. The code book is loaded from a JSON file when the codelet starts.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- features [TensorProto]: A batch of feature vectors encoded as a rank 2 tensor. Each feature will be checked against the codebook. The shape of the tensor is: (feature, elements).
Outgoing messages
- codes [TensorProto]: A rank 3 tensor which stores the code vectors for each input feature vector. For each feature vector num_output code vectors are returned. The shape of the tensor is: (feature, code, elements).
- correlations [TensorProto]: A rank 2 tensor which stores the correlation between code vectors and the input feature vector. There is one correlation per output code vector. The shape of the tensor is: (feature, correlation).
Parameters
- codebook_path [string] [default=]: Path to the file containing the codebook in line JSON format. The codebook must contain one line per code word with the following format:
[[f_1, f_2, …, f_n], [c_1, c_2, …, c_m]]
Here (f_1, …, f_n) is the n-dimensional feature vector and (c_1, …, c_m) the m-dimensional code vector. The dimension of feature and code vectors must be identical for all entries.
- num_output [int] [default=2]: The number of closest code vectors to look up for each feature vector; only the best matches are provided.
isaac.object_pose_estimation.CodebookPoseSampler¶
Description
Generates camera poses for codebook generation assuming the object under observation is placed at the origin. Uses icosahedron subdivision to uniformly sample viewpoints from a sphere of given radius, and additionally applies in-plane rotation (roll) along the camera axis. Publishes one sampled pose per tick.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- pose [Pose3dProto]: Pose to teleport the camera to
Parameters
- radius [std::vector<double>] [default={}]: List of radii of the spheres where the view points are sampled. This enables codebook generation at multiple distances.
- center [Vector3d] [default=Vector3d::Zero()]: Center of the sphere where the view points are sampled. Spheres of all radii are centered at this location.
- num_view [int] [default=2562]: Minimal number of view points on the view sphere
- num_inplane [int] [default=1]: Number of in-plane rotations at each view point
- min_roll [double] [default=-Pi<double>]: Minimum roll for codebook view sampling from the sphere. Note: Minimum roll must be in range [-Pi, Pi].
- max_roll [double] [default=Pi<double>]: Maximum roll for codebook view sampling from the sphere. Note: Maximum roll must be in range (min_roll, min_roll + 2*Pi].
- min_pitch [double] [default=-Pi<double>/2]: Minimum pitch for codebook view sampling from the sphere. Note: Minimum pitch must be in range [-Pi/2, Pi/2].
- max_pitch [double] [default=Pi<double>/2]: Maximum pitch for codebook view sampling from the sphere. Note: Maximum pitch must be in range (min_pitch, Pi/2].
- min_yaw [double] [default=-Pi<double>]: Minimum yaw for codebook view sampling from the sphere. Note: Minimum yaw must be in range [-Pi, Pi].
- max_yaw [double] [default=Pi<double>]: Maximum yaw for codebook view sampling from the sphere. Note: Maximum yaw must be in range (min_yaw, min_yaw + 2*Pi].
- report_success [bool] [default=false]: If report_success is true, the codelet reports success after all view points are sampled. If not, the codelet ends the app after all view points are sampled.
- max_ticks_after_success [int] [default=10]: Number of ticks to wait after the view points are sampled before closing the app. This param is used when report_success is set to false.
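A configuration sketch for sampling a codebook at two distances; the node name pose_sampler and the radii are hypothetical placeholders:
"config": {
  "pose_sampler": {
    "CodebookPoseSampler": {
      "radius": [0.7, 1.0],
      "center": [0.0, 0.0, 0.0],
      "num_view": 2562,
      "num_inplane": 8,
      "report_success": true
    }
  }
}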
isaac.object_pose_estimation.CodebookWriter¶
Description
Receives a feature vector and a label vector and outputs them as a JSON object. This codelet can be used to create a code book for the CodebookLookup codelet.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- feature [TensorProto]: A rank-2 tensor containing the feature vector
- code [TensorProto]: A rank-1 tensor containing the code vector
Outgoing messages
- codebook [JsonProto]: A JSON array with two entries, one each for feature and code vector. Features and code vectors themselves are stored as arrays of floating point numbers.
- Parameters
- (none)
isaac.object_pose_estimation.DopeDecoder¶
Description
This codelet receives input from the DOPE network model and outputs the 3D pose as Detections3Proto. It detects multiple instances of the same object from 2D keypoints (peaks in the input maps) that correspond to corners and centroids of the bounding box, then uses PnP to calculate the 3D pose. Implementation based on https://github.com/NVlabs/Deep_Object_Pose/tree/master/src/dope/inference
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- maps [TensorProto]: Input belief and vector maps
- intrinsics [CameraIntrinsicsProto]: Input camera pinhole parameters corresponding to the input of DOPE network
Outgoing messages
- output_poses [Detections3Proto]: Pose output for the object estimated from the input belief map
Parameters
- label [string] [default=]: Name for the prediction label in Detections3Proto
- box_dimensions [Vector3d] [default=]: The bounding box dimensions of the object in m
- map_peak_threshold [float] [default=0.01f]: Threshold for finding peak candidates in the gaussian-blurred belief maps
- peak_weight_threshold [float] [default=0.1f]: Threshold for peaks in the original (unblurred) belief map to identify an object
- affinity_map_angle_threshold [float] [default=0.5f]: Threshold for the angle between affinity map vector field and the corresponding corner to centroid peak. The corner peak is associated with the centroid peak (belong to the same object) when the angle is below this threshold.
- min_weight_sum [float] [default=1e-6]: Minimum acceptable sum of averaging weights to filter the peaks in belief map.
- gaussian_sigma [double] [default=3.0]: The standard deviation of the Gaussian blur of the belief map before peak finding.
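A configuration sketch; the node name dope_decoder, the label, and the box dimensions are hypothetical placeholders standing in for an actual object model:
"config": {
  "dope_decoder": {
    "DopeDecoder": {
      "label": "industrial_dolly",
      "box_dimensions": [0.9, 0.6, 0.3],
      "map_peak_threshold": 0.01,
      "gaussian_sigma": 3.0
    }
  }
}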
isaac.object_pose_estimation.GeneratePoseCnnDecoderData¶
Description
This codelet generates a validation dataset for the Pose CNN Decoder by saving images, bounding box parameters of detections, and corresponding pose labels to disk. This codelet assumes one-to-one correspondence of the image, detection, and label and is generally used for a single input image, detection, and pose per tick.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- rgb_image_tensors [TensorProto]: Batch of cropped RGB image tensors with dimension (Nd, rows, cols, 3), where Nd is the number of input detections and rows, cols are the rows and columns of the output image
- bbox_tensors [TensorProto]: Detection encoded as a (Nd, 4) tensor where Nd is the number of bounding boxes. Channels are: (bb_min_x, bb_min_y, bb_max_x, bb_max_y).
- translation_tensors [TensorProto]: Tensors containing the encoded translation of the object (center_x, center_y, depth). center_x and center_y are the object center x and y in pixel coordinates respectively; depth is the object depth with respect to the camera in meters
- rotation_tensors [TensorProto]: Tensors containing the quaternion rotation of the object in order (q.w, q.x, q.y, q.z)
- segmentation_image_tensors [TensorProto]: Batch of cropped segmentation image tensors with dimension (Nd, rows, cols), where Nd is the number of input detections and rows, cols are the rows and columns of the output image
- Outgoing messages
- (none)
Parameters
- rgb_pixel_normalization_mode [ImageToTensorNormalization] [default=ImageToTensorNormalization::kNone]: Type of Normalization to be performed. Valid modes are {kNone, kUnit, kPositiveNegative, kHalfAndHalf}
- segmentation_pixel_normalization_mode [ImageToTensorNormalization] [default=ImageToTensorNormalization::kNone]: Type of Normalization to be performed for the segmentation image. Valid modes are {kNone, kUnit, kPositiveNegative, kHalfAndHalf}
- num_training_samples [int] [default=0]: The total number of training samples to generate. This is set to 0 for online training
- num_validation_samples [int] [default=1000]: The total number of testing samples to generate for validation during training
- path_to_dataset [string] [default=]: Path to the root of the Pose CNN Decoder dataset for training and validation. Two directories with names training and validation are created inside this dataset. Directories named images, bbox_inputs, and pose_labels are generated inside the validation directory; the training directory contains an additional directory, decoder_labels, that stores the segmentation image for decoder training. Images are saved as images/<number>.png, detections as bbox_inputs/<number>.txt, and poses as pose_labels/<number>.txt
isaac.object_pose_estimation.ImagePoseEncoder¶
Description
This codelet encodes rotation and translation information of the object during codebook generation. Encoded values are (1) orientation in quaternions from the pose, which is used for estimating 3D rotation, and (2) bounding box size, which is used for estimating the object’s distance to the camera.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_intrinsics [CameraIntrinsicsProto]: Input CameraIntrinsicsProto which are used for extracting camera pinhole parameters
- input_detections [Detections2Proto]: List of bounding box detections to compute translation parameters in the codebook
- input_poses [Detections3Proto]: List of input object poses for storing orientation labels in the codebook.
Outgoing messages
- pose_encoding [TensorProto]: A rank-1, 32-bit float tensor with the following nine entries: 1 - 4: quaternions for the 3D orientation; 5: bounding box diagonal; 6: rendered distance of the camera from the object along z; 7: focal length of the camera used to generate the codebook; 8, 9: offset of the bounding box center from the pinhole center in image coordinates
- Parameters
- (none)
isaac.object_pose_estimation.PoseCnnDecoder¶
Description
A codelet that takes inputs from the pose estimation CNN TensorRT model and outputs the 3D pose as Detections3Proto.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- translation [TensorProto]: A batch of translation vectors (cx, cy, depth) encoded as a rank 2 tensor of shape (batch_size, 3). cx, cy are the scaled x and y pixel coordinates of the body center. Coordinates are scaled such that the values along x are in range [-1, 1]. depth is the real world depth in m from camera.
- rotation [TensorProto]: A batch of quaternion vectors of rotation encoded as a rank 2 tensor. Quaternions order: (q.w, q.x, q.y, q.z) The shape of the tensor is: (batch_size, 4).
- intrinsics [CameraIntrinsicsProto]: Intrinsics including pinhole and distortion parameters
- detections [Detections2Proto]: Input bounding box detections for label data of the 3D detections
Outgoing messages
- output_poses [Detections3Proto]: Pose output for a batch of objects estimated from the input translation and rotation vectors
Parameters
- training_camera_vertical_fov_deg [double] [default=42.0]: Vertical field of view angle of the training camera in degrees
- training_image_dimensions [Vector2i] [default=Vector2i(720, 1280)]: Image dimensions of the training data.
- aspect_ratio_match_tolerance [double] [default=0.1]: Tolerance to test that the aspect ratio of the training and inference images match
isaac.object_pose_estimation.PoseEncoder¶
Description
Encodes 3D detections into tensors with translation and rotation parameters
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detection3 [Detections3Proto]: List of 3D detections of the objects to encode into tensors
- intrinsics [CameraIntrinsicsProto]: Intrinsics including pinhole and distortion parameters
Outgoing messages
- center_and_depth [TensorProto]: A tensor of dimensions (N, 3) where N is the number of rigid bodies. The 3 columns are translation parameters in the order (c_x, c_y, c_depth). c_x, c_y are the object center in pixel coordinates along rows and columns respectively; c_depth is the real-world object distance from the camera in meters.
- rotation [TensorProto]: A tensor of dimensions (N, 4) where N is the number of rigid bodies. The four columns are orientations in quaternions in the order (qw, qx, qy, qz).
Parameters
- center_out_of_frame_tolerance [double] [default=10]: Tolerance in pixels for object center to be out of image frame
isaac.object_pose_estimation.PoseEstimation¶
Description
Codelet that, given an image and the detections of an object inside this image, returns the 3D pose of the object relative to the camera.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- intrinsics [CameraIntrinsicsProto]: Intrinsics including pinhole and distortion parameters
- input_detections [Detections2Proto]: Input bounding box from any object detection model (YOLO/ResNet).
- codes [TensorProto]: Input code vectors coming from a codebook, assuming a list of quaternion/center offset/diagonal
- correlations [TensorProto]: Correlations for code vectors indicating how good the match was.
Outgoing messages
- output_poses [Detections3Proto]: Output poses and bounding box.
- Parameters
- (none)
isaac.object_pose_estimation.PoseFiducialsPnP¶
Description
This codelet estimates the object pose from a set of fiducials attached to the object. It reads in a list of fiducial detections with respect to the sensor frame, uses the Perspective-n-Point (PnP) solver on:
- 2D projections of fiducial detections corners;
- 3D prior poses of fiducial detections corners from CAD or measurements;
Outputs the final list of object poses with respect to the sensor frame.
It also allows setting thresholds on the reprojection error and the number of visible keypoints, and calculates a proxy for the object detection confidence estimate based on the ratio:
(reprojection_error_threshold - reprojection_error) / reprojection_error_threshold
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- intrinsics [CameraIntrinsicsProto]: Intrinsics of color camera containing the Fiducials
- fiducials [FiducialListProto]: Input list of fiducial detections with respect to the sensor frame.
Outgoing messages
- detections [Detections3Proto]: Output list of object poses, with respect to the sensor frame and their class names. The names of the objects are determined by a reference_frames_of_interest parameter. These correspond to the reference frames of objects to which the tags are attached. This reference frame could also be one of the tags.
- refined_fiducials [FiducialListProto]: Output list of PnP constrained fiducial detections with respect to the sensor frame.
Parameters
- reference_frames_of_interest [std::vector<std::string>] [default=]: A list of reference frames of interest. The pose tree should contain:
* Transforms for fiducials_of_interest_T_reference_frame. This is normally a reference frame of the object to which the tags are attached, but this could also be one of the tags. This tag does not need to be visible / detected. NOTE: to populate these transformations in the pose tree, a CAD model or physical measurements could be used. A helper script Isaac/Print activeObject_T_[IsaacTag] is also available through the Isaac Sim (Unity) menu to determine these transformations, if the simulation model for the object is available.
* Transforms from fiducials to fiducial corner keypoints. These transforms should be populated by the fiducials detection and pose estimation codelet. NOTE: unique fiducials are required to be attached to individual reference frame instances.
- reprojection_error_threshold [double] [default=5.0]: Reprojection error threshold, in pixels. NOTE: Pose estimates with a reprojection error above this threshold will be filtered out. Confidence is set to: (reprojection_error_threshold - reprojection_error) / reprojection_error_threshold
- minimum_keypoints_threshold [int] [default=8]: Minimum number of keypoints threshold. NOTE: pose estimates for objects associated with fewer keypoints will be filtered out.
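A configuration sketch, assuming a hypothetical node named pose_fiducials_pnp and a hypothetical reference frame named dolly; the thresholds restate the defaults above:
"config": {
  "pose_fiducials_pnp": {
    "PoseFiducialsPnP": {
      "reference_frames_of_interest": ["dolly"],
      "reprojection_error_threshold": 5.0,
      "minimum_keypoints_threshold": 8
    }
  }
}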
isaac.object_pose_refinement.PoseRefinement¶
Description
This codelet takes image/measurement surflets and the detection pose of an object. It iteratively optimizes the model position based on the pose estimate and minimizes the error between the measurement surflets’ positions and the model position. Once the model converges or the maximum specified number of iterations is reached, it returns the refined pose in 3D relative to camera coordinates.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- surflets [CompositeProto]: Input image superpixels
- raw_poses [Detections3Proto]: Input object poses from any pose estimation model (AutoEncoder/PoseCNN)
- object_assignment [TensorProto]: Input object assignment from SurfletsObjectAssignment component
Outgoing messages
- refined_poses [Detections3Proto]: Output refined poses and prediction labels
Parameters
- line_search_break_on_failure [bool] [default=true]: Line search parameter to specify if the gradient descent algorithm breaks on failed search
- line_search_armijo_factor [double] [default=0.5]: Line search parameter to control backtracking. It lies in the range (0, 1).
- line_search_max_iterations [int] [default=20]: Line search parameter to limit maximum iterations during line search
- line_search_step_factor [double] [default=0.5]: Line search parameter to control the delta steps in search direction
- line_search_tolerance [double] [default=1e-2]: Line search parameter that determines the convergence criteria between 2 iterations
- gradient_descent_max_iterations [int] [default=50]: Gradient descent parameter that determines the maximum number of line searches performed per optimization
- num_iterations [int] [default=10]: Optimization parameter to limit maximum number of iterations and model to measurement reassignment
- distance_threshold [double] [default=0.1]: Convergence threshold for distance threshold per assignment
- mesh_pose_offset [Pose3d] [default=]: Pose offset added to move from dolly pose to mesh pose
- model_atlas [string] [default=”object_pose_refinement.model_atlas”]: Name of surflet model atlas database. This has the form node_name.component_name.
- assignment_component_name [string] [default=”object_pose_refinement.surflet_assignment”]: Name of surflet assignment component. This has the form node_name.component_name.
- distance_component_name [string] [default=”object_pose_refinement.surflet_distance”]: Name of surflet distance component. This has the form node_name.component_name.
- assignment_distance_component_name [string] [default=”object_pose_refinement.surflet_assignment_distance”]: Name of surflet assignment distance component. This has the form node_name.component_name.
- reference_frame_name [string] [default=]: Name of reference frame of the published surflet model
isaac.orb.ExtractAndVisualizeOrb¶
Description
Codelet to extract and visualize ORB features. Input is an image; output is the same image overlaid with the extracted ORB features.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_image [ImageProto]: Input image
Outgoing messages
- coordinates [TensorProto]: Output keypoint coordinates
- features [TensorProto]: Output features ids. A rank-1 int tensor with dimensions (number of features).
Parameters
- max_features [int] [default=500]: Maximum number of features to extract
- fast_threshold [int] [default=20]: FAST threshold, lower means higher sensitivity. Note that this threshold controls how many features are extracted before filtering the features down to the requested number of max_features. Decrease this parameter if the resulting amount of features is too low (that is, constantly below max_features).
- grid_num_cells_linear [int] [default=8]: How many cells to split the image into for spatial regularization
- downsampling_factor [double] [default=0.7]: How much to reduce the image size between ORB feature levels
- max_levels [int] [default=4]: Maximum number of different ORB levels
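A configuration sketch raising feature sensitivity, assuming a hypothetical node named orb; lowering fast_threshold extracts more candidate features before they are filtered down to max_features:
"config": {
  "orb": {
    "ExtractAndVisualizeOrb": {
      "max_features": 500,
      "fast_threshold": 10,
      "max_levels": 4
    }
  }
}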
isaac.otg5.DualOtg5¶
Description
Planner that generates motion trajectories according to a linear and an angular delta sent by a previous component. The generated trajectory will be smooth w.r.t. the configured limits.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- delta [CompositeProto]: Received delta (dx, da) to use for driving. This is how much we want to travel.
- odometry [Odometry2Proto]: Estimated robot pose and velocity
Outgoing messages
- trajectory [CompositeProto]: Contains time series of poses and velocities to form the trajectory.
Parameters
- trajectory_time_step [double] [default=0.01]: Delta time between states in outgoing trajectory message.
- trajectory_time_length [double] [default=5.0]: Next this many seconds of the trajectory will be published as message.
- robot_frame [string] [default=”robot”]: Name of the robot frame. Read at start only.
- static_frame [string] [default=”world”]: Name of a frame which is static. Read at start only.
- speed_thresholds [Vector2d] [default=Vector2d(0.1, DegToRad(10.0))]: When generating trajectories, predicted values will be used as initial conditions unless the linear and angular speed values from odometry are off by more than these values. They would be off, for example, if the robot was controlled manually.
- linear_maximum_jerk [double] [default=1.0]: Maximum jerk in m/s^3. Minimum jerk equals negative of this value.
- linear_maximum_allowed_acceleration [double] [default=1.0]: Hard limit for the acceleration profile in m/s^2. Acceleration of a trajectory should not exceed this value at any time. Minimum allowed acceleration equals negative of this value.
- linear_maximum_desired_acceleration [double] [default=0.5]: Desired limit for the acceleration profile in m/s^2. If the initial acceleration exceeds this value, negative jerk will be applied to decrease it. Minimum desired acceleration equals negative of this value.
- linear_maximum_allowed_speed [double] [default=1.0]: Hard limit for the speed profile in m/s. Speed of a trajectory should not exceed this value at any time.
- linear_maximum_desired_speed [double] [default=0.5]: Desired limit for the speed profile in m/s. Robot will slow down if the current speed exceeds this value.
- linear_minimum_allowed_speed [double] [default=-0.2]: Hard limit for the speed profile in m/s. Speed of a trajectory should not be less than this value at any time.
- linear_minimum_desired_speed [double] [default=-0.1]: Desired limit for the speed profile in m/s. Robot will speed up if the current speed is less than this value.
- angular_maximum_jerk [double] [default=1.0]: Maximum jerk in rad/s^3. Minimum jerk equals negative of this value.
- angular_maximum_allowed_acceleration [double] [default=1.0]: Hard limit for the acceleration profile in rad/s^2. Acceleration of a trajectory should not exceed this value at any time. Minimum allowed acceleration equals negative of this value.
- angular_maximum_desired_acceleration [double] [default=0.5]: Desired limit for the acceleration profile in rad/s^2. If the initial acceleration exceeds this value, negative jerk will be applied to decrease it. Minimum desired acceleration equals negative of this value.
- angular_maximum_allowed_speed [double] [default=1.0]: Hard limit for the speed profile in rad/s. Speed of a trajectory should not exceed this value at any time. Minimum allowed speed equals negative of this value.
- angular_maximum_desired_speed [double] [default=0.5]: Desired limit for the speed profile in rad/s. Robot will slow down if the current speed exceeds this value. Minimum desired speed equals negative of this value.
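A configuration sketch tightening the linear limits, assuming a hypothetical node named trajectory_planner; values that differ from the defaults above are illustrative only:
"config": {
  "trajectory_planner": {
    "DualOtg5": {
      "linear_maximum_desired_speed": 0.4,
      "linear_maximum_allowed_speed": 0.8,
      "linear_maximum_jerk": 1.0,
      "angular_maximum_desired_speed": 0.5,
      "trajectory_time_length": 5.0
    }
  }
}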
isaac.otg5.GoalToDelta¶
Description
Codelet that calculates the delta between the current robot pose and a goal pose. It outputs the resulting delta to its delta channel as CompositeProto, holding the delta as (dx, dy, da).
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- goal [Goal2Proto]: The target pose for the robot
Outgoing messages
- delta [CompositeProto]: Delta (dx, dy, da) to send out
Parameters
- robot_frame [string] [default=”robot”]: Name of the robot frame. Read at start only.
- static_frame [string] [default=”world”]: Name of a frame which is static. Read at start only.
isaac.otg5.ObstacleDeltaLimiter¶
Description
Filter codelet that receives a motion delta message of type CompositeProto via its delta_in message channel. It limits the received delta’s translational motion distance to the first point where the robot is too close to an obstacle. The valid part of the delta is forwarded via the delta_out message channel.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- delta_in [CompositeProto]: Incoming delta driving signal to check for collisions
Outgoing messages
- delta_out [CompositeProto]: Delta to send out to OTG5
Parameters
- robot_frame [string] [default=”robot”]: Name of the robot frame. Read at start only.
- static_frame [string] [default=”world”]: Name of a frame which is static. Read at start only.
- robot_model [string] [default=]: Name of the robot model node. If not specified, this codelet will try to find one in the application. If more than one exists, it will fail.
- minimum_distance [double] [default=0.2]: How close to obstacle the robot can be (in meters).
- desired_distance [double] [default=0.5]: Desired distance (in meters) the robot keeps from an obstacle before hitting the error margin. This parameter can be used to keep the robot navigable, e.g. to still be able to take turns or change the path.
- collision_check_step_sizes [Vector2d] [default=Vector2d(0.01, DegToRad(1.0))]: Step sizes (in meters and in radians) used to determine whether a straight path or a rotation is valid regarding obstacles on the way.
- obstacle_names [std::vector<std::string>] [default=]: Names of the obstacles to avoid in the Atlas. This list is read at start; if it changes, this codelet needs to be restarted.
isaac.otg5.RotateTranslateRotateStateMachine¶
Description
Using a simple state machine, this codelet produces a stateful delta signal based on a robot/goal pose delta signal received from an external component. The delta signal is preprocessed with regard to multiple motion stages. The logic flow for the motion stages is:
1. Stop the robot if it is not stationary.
2. Rotate to point to the target location.
3. Drive straight to the target location.
4. Rotate in place at the target location to meet the goal pose.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- delta_in [CompositeProto]: Delta between current goal and robot poses received (dx, dy, da)
- odometry [Odometry2Proto]: Estimated robot pose and velocity.
Outgoing messages
- delta_out [CompositeProto]: Delta to send out to OTG5 (dx, da) after processed by state machine
Parameters
- thresholds_stationary [Vector2d] [default=Vector2d(0.025, DegToRad(5.0))]: Thresholds on speed to determine if the robot is stationary (positional in m/s, and angular in rad/s).
- thresholds_direction_angle [Vector2d] [default=Vector2d(DegToRad(2.0), DegToRad(15.0))]: Two thresholds affecting when the state machine transitions due to change in robot angle with respect to the target angle. In the beginning, the robot will rotate in place until it points to the target. Once the first threshold in this parameter is met, the robot will start driving straight. See codelet comment for a description of the state machine. The second threshold is used to decide when the angle towards the goal is large enough to restart the state machine while linearly moving towards the target. Both values are in radians.
- thresholds_translational_distance [Vector2d] [default=Vector2d(0.15, 0.3)]: Two thresholds affecting when the state machine transitions due to change in relative distance to the goal position. Once the distance to target in x direction of the robot is smaller than the first threshold value in this parameter, the robot will start rotating to match the goal angle. See codelet comment for a description of the state machine. The second threshold is used to decide when the translational distance to the target is large enough to restart the state machine while rotating the robot. Both values are in meters.
- drive_backward [bool] [default=false]: If true, the robot will travel backward in the linear drive state.
- rotate_in_translation_state [bool] [default=true]: Denotes whether during the straight translation phase minor angle errors should be corrected automatically. Disabling this is useful for situations in which no angular velocities are allowed.
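The otg5 components above can be chained so that the goal delta is preprocessed by the state machine, limited against obstacles, and finally converted into a trajectory. One plausible wiring, sketched with hypothetical node names goal_to_delta, state_machine, limiter, and trajectory_planner:
"edges": [
  { "source": "goal_to_delta/GoalToDelta/delta", "target": "state_machine/RotateTranslateRotateStateMachine/delta_in" },
  { "source": "state_machine/RotateTranslateRotateStateMachine/delta_out", "target": "limiter/ObstacleDeltaLimiter/delta_in" },
  { "source": "limiter/ObstacleDeltaLimiter/delta_out", "target": "trajectory_planner/DualOtg5/delta" }
]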
isaac.path_planner.EndEffectorGlobalPlanner¶
Description
Converts an end effector cartesian pose to joint angles using inverse kinematics. The kinematic tree model is provided by a map::KinematicTree component. The target cartesian pose can be read either from the pose tree or from a CompositeProto message containing position and rotation entities for the end effector link configured in this codelet. The codelet can be configured to report success when the current end effector pose, calculated from the current joint angles using forward kinematics, is within tolerance of the cartesian target.
This codelet only computes a new joint angle target if the cartesian pose target has changed; otherwise it publishes the previously calculated joint angle target. This avoids unnecessary replanning that causes jerky motion due to the IK solver outputting slightly different joint angle solutions because of different initial joint angles or numerical accuracies.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- end_effector_target [CompositeProto]: The desired end effector pose in composite message
- joint_state [CompositeProto]: Current state as a starting position for IK
Outgoing messages
- joint_target [CompositeProto]: The corresponding joint angle as goal
Parameters
- kinematic_tree [string] [default=]: Node name containing the manipulation::RobotModel component
- end_effector_name [string] [default=]: Frame of the manipulator base
- use_pose_tree [bool] [default=false]: If true, the end effector target comes from pose tree. Otherwise it comes from message
- target_pose [string] [default=]: Name of the target pose on pose tree if use_pose_tree is true
- root_frame [string] [default=]: Name of the root frame of the kinematic tree on pose tree if use_pose_tree is true
- report_success [bool] [default=false]: If true, report success if the state is within tolerance to target
- p_tolerance [double] [default=0.02]: Translation tolerance (in meter) for success
- q_tolerance [double] [default=0.05]: Rotation angle tolerance (in radian) for success
- number_attempts [int] [default=10]: Number of attempts we make to find an IK solution from a random state.
- ik_gradient_tolerance [double] [default=1.0e-6]: Tolerance on the gradient magnitude before we stop the gradient descent of the IK.
- ik_maximum_iterations [int] [default=1000]: Maximum number of iterations used in the gradient descent of the IK.
- ik_line_search_maximum_iterations [int] [default=30]: Maximum number of iterations used in the line search of the IK.
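A configuration sketch; the node names global_planner and kinematic_tree and the end effector frame gripper are hypothetical placeholders:
"config": {
  "global_planner": {
    "EndEffectorGlobalPlanner": {
      "kinematic_tree": "kinematic_tree",
      "end_effector_name": "gripper",
      "use_pose_tree": false,
      "p_tolerance": 0.02,
      "q_tolerance": 0.05
    }
  }
}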
isaac.path_planner.GlobalPlanSmoother¶
Description
Creates a valid smooth global plan based on a given rough plan.
A graph-based planning algorithm on a 3-dimensional state space (position + rotation) in a large domain often produces rough, non-optimal plans with unnecessary corners and detours. This component takes such a rough path and smooths it into a more optimal but still valid plan. The smooth plan can then, for example, be used as the input of a trajectory planner.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- rough_plan [Plan2Proto]: A global plan which is potentially not smooth
Outgoing messages
- smooth_plan [Plan2Proto]: A valid smooth global plan computed based on the input global plan
Parameters
- robot_model [string] [default=”navigation.shared_robot_model/SphericalRobotShapeComponent”]: Name of the robot model node
- obstacle_names [std::vector<std::string>] [default=std::vector<std::string>({“map/isaac.navigation.DistanceMap”, “map/restricted_area”, “global_plan_local_map”})]: List of obstacles to use for the planning. The ObstacleAtlas is queried.
- backward_shortcut [bool] [default=false]: Whether we allow to shortcut moving backward
- distance_between_waypoints [double] [default=0.25]: The target distance between waypoints
- number_shortcut_iterations [int] [default=1000]: How many iterations we perform each tick to attempt to shortcut
- optimized_length [double] [default=50.0]: How much of the path are we optimizing: only the first X meters will be optimized
- target_clearance [double] [default=0.25]: Target clearance from the obstacles. If a waypoint is closer than this distance we try to move it in the normal direction of the path to reach the target clearance.
- maintain_distance_factor [double] [default=0.9]: When shortcutting, how much closer are we allowed to get to the obstacle. A value of zero means we can shortcut as much as we want as long as the path is valid, while a value of 1 means we can shortcut as long as either the start or end of the path is the closest to the obstacles.
isaac.path_planner.GlobalPlanner¶
Description
Global planner that takes a target destination and outputs a global plan from the current position to the target. This produces a rough plan that should be fed to an optimizer (such as GlobalPlanSmoother).
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- goal [Goal2Proto]: The target destination received
- previous_path [Plan2Proto]: The previous plan the robot is following. If this plan is still valid, the global planner will just output it back; if not, the global planner will generate a new plan.
Outgoing messages
- plan [Plan2Proto]: The computed global plan
Parameters
- graph_initialization_steps [int] [default=20000]: How many random samples to use while pre-computing the graph.
- graph_in_tick_steps [int] [default=0]: How many random samples to use during each tick to increase the graph size.
- graph_max_steps [int] [default=5000]: How many random samples to use when no valid path exists.
- robot_model [string] [default=”shared_robot_model”]: Name of the robot model node
- static_obstacle_names [std::vector<std::string>] [default=std::vector<std::string>({“map/isaac.navigation.DistanceMap”, “map/restricted_area”})]: Name of the static obstacles. The first one needs to be the one related to the global map. Note: these obstacles are assumed to be constant; if they change, the planner needs to be stopped and restarted.
- dynamic_obstacle_names [std::vector<std::string>] [default={“global_plan_local_map”}]: Name of the dynamic obstacles. (Can be changed live)
- graph_file_in [string] [default=]: Path to a file containing the graph to load.
- graph_file_out [string] [default=”/tmp/graph.json”]: Path to a file where the graph is saved at the end
- model_error_margin [double] [default=0.05]: How close to obstacle the robot can be (in meters).
- model_max_translation_distance [double] [default=1.0]: Maximum distance between two points to be connected (in meters). A shorter distance produces a denser graph. In general a value in the order of the average distance of any point to the closest obstacle is recommended.
- model_max_rotation_distance [double] [default=TwoPi<double>]: Maximum rotation between two points to be connected (in radians). A shorter distance produces a denser graph.
- model_backward_path_penalty [double] [default=10.0]: The penalty when moving backward
- model_invalid_path_penalty [double] [default=100.0]: The penalty when moving into a dynamic obstacle.
- max_colliding_lookup [double] [default=0.5]: How much distance into an obstacle we tolerate for the starting position and the target.
- check_direct_path [bool] [default=true]: Whether we can directly connect the start and end positions or whether we should always use the graph to do the planning.
- world_dimensions [geometry::RectangleD] [default=]: Dimensions of the world. Random positions will be sampled in this area. If not set, it will be automatically computed using the obstacle map
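Per the description above, the rough plan should be passed to the smoother; a hedged edge sketch with hypothetical node names global_plan and plan_smoother:
"edges": [
  { "source": "global_plan/GlobalPlanner/plan", "target": "plan_smoother/GlobalPlanSmoother/rough_plan" }
]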
isaac.path_planner.PlanCorrectionValidation¶
Description
PlanCorrectionValidation is a codelet used to check if the new plan is valid. Currently it is used to prevent the robot from replanning in some situations: If the robot is inside some area (provided using a PolygonMapLayer, see parameters polygon_layer_name and polygon_names below), then replanning is not allowed. However if the target destination changes, then replanning is allowed. Note: this codelet assumes the target is the last position on the plan.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_plan [Plan2Proto]: The latest suggested plan; it will be forwarded to the output
Outgoing messages
- output_plan [Plan2Proto]: Forward the latest input_plan if replanning is allowed (either the target has changed or the robot is outside the polygons provided by the PolygonMapLayer).
Parameters
- polygon_layer_name [string] [default=]: Name of the component that contains the PolygonMapLayer used to get the list of polygons.
- polygon_names [std::vector<std::string>] [default=]: List of the names of the polygons in the PolygonMapLayer used to compute the cost. If not provided, all the polygons will be used.
- robot_frame [string] [default=”robot”]: The name of the frame used for planning.
isaac.path_planner.Pose2DirectedGraphLoader¶
Description
Helper component to load a Pose2DirectedGraph from a file. This component should be added to the same node as Pose2GraphPlanner; it will be used to load the information needed for planning. It loads the graph over multiple ticks to prevent the codelet from blocking all the compute resources and to allow the application to be gracefully terminated.
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- graph_filename [string] [default=”/tmp/pose2_grid_graph.capnp.bin”]: Path to a file containing the graph to load.
- bucket_size [double] [default=0.5]: The dimensions of the buckets we store the nodes in. Once we get a new target/position, we will look into this bucket and the adjacent buckets to try to find a node.
isaac.path_planner.Pose2DirectedGraphPlanner¶
Description
Helper component to compute a path using a Pose2DirectedGraph. It keeps track of the list of blocked edges and then plans between a pair of nodes. Once a plan is found, it checks that the path is valid; if not, it updates the list of currently blocked edges and tries again.
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- replanning_attempts [int] [default=10]: This parameter controls how many times we try to replan in case of a collision before giving up and returning the current plan.
- validity_length_change [double] [default=15.0]: This parameter controls how far ahead we check whether the path is valid. In case a path is invalid (in collision with an obstacle), we might attempt to replan several times based on the parameter replanning_attempts.
- blocked_edge_timeout [double] [default=10.0]: When we mark an edge as blocked (because it’s colliding with the static obstacles), we will need to check again in the future whether or not the edge is still blocked. This parameter controls how long we wait after marking an edge blocked before we check it again. A value of 0 means we will check all the blocked edges every tick.
- threshold_until_edge_blocked [double] [default=0.0]: The threshold time (in seconds) until an edge is actually considered blocked.
isaac.path_planner.Pose2GraphPlanner¶
Description
Global planner. This codelet takes a target destination from the goal channel and outputs a plan from the current position of the robot to the target. To find a path, it uses a predefined directed graph and the DirectedShortestPath gem. If no path exists, it will output a path to safely stop the robot. It is assumed that the graph already contains all the information about the world and the robot shape; the environment is only used to find which nodes are reachable from the current robot position and which nodes can reach the target. No other check will be performed; this task is left to the local planner. There is currently no replanning around obstacles.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- waypoints [Plan2Proto]: The list of waypoints to drive through. Once we reach the final waypoint the path will stop.
Outgoing messages
- plan [Plan2Proto]: The computed global plan that goes from the current world_T_robot position to the latest target received. If no path exists, it will provide a plan to stop the robot.
Parameters
- robot_model [string] [default=]: Name of the robot model node. If not specified, this codelet will try to find one in the application. If more than one exists, it will fail.
- obstacle_names [std::vector<std::string>] [default=std::vector<std::string>({“global_plan_local_map”})]: Name of the static obstacles. The first one needs to be the one related to the global map. Note: these obstacles are assumed to be constant; if they change, the planner needs to be stopped and restarted.
- model_max_translation_distance [double] [default=1.5]: Maximum translation distance between two states to be considered directly connected.
- model_max_rotation_distance [double] [default=DegToRad(45.0)]: Maximum rotation between two states to be considered directly connected.
- model_error_margin [double] [default=0.1]: The smallest distance (in meters) allowed between the robot and obstacles in the scene
- keep_last_valid_plan [bool] [default=true]: If this flag is true and we fail to find a path at a given tick but had a valid path in the past (and the target has not changed), we will keep the previous path; otherwise we will output an empty path.
- clear_cached_path [bool] [default=true]: If true, we will discard the cached path between pairs of waypoints once the target changes. If false, we will keep the cached path; this can speed up planning if the list of used waypoints is small, but it requires more memory and can grow quickly if the waypoints used are always different.
isaac.path_planner.Pose2GridGraphBuilder¶
Description
Pose2GridGraphBuilder is a codelet that helps generate a dense directed graph for navigation, meant to be used with the DirectedGraphGlobalPlanner planner. It uses the map and other available obstacles, as well as the robot shape, to determine a dense set of valid positions and how we can safely navigate from one to another. It creates edges between them with custom weights depending on the distance to obstacles and the orientation, in order to help the DirectedGraphGlobalPlanner find the “optimal” path.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- robot_model [string] [default=]: Name of the robot model node. If not specified, this codelet will try to find one in the application. If more than one exists, it will fail.
- graph_filename [string] [default=”/tmp/pose2_grid_graph.capnp.bin”]: Path to a file containing the graph to load.
- flatmap_cost_name [string] [default=]: Name of the component that contains the FlatmapCost used to build the graph.
- region_of_interest [geometry::RectangleD] [default=]: The region of interest we will sample our positions.
- rotation_cost_factor [double] [default=1.0]: How much to penalize rotation compared to translation.
- curve_cost_factor [double] [default=0.5]: How much to penalize rotation when taking a curving edge. It should be less than rotation_cost_factor or those edges won’t be considered.
- position_sample_distance [double] [default=0.2]: The distance in X/Y direction we sample to generate the nodes of the graph.
- neighbours_roi_radius [int] [default=5]: For a given angles_generation value, we look at every node in the range [-angles_generation, angles_generation] x [-angles_generation, angles_generation] that is directly reachable from the position (0, 0). These are all the orientations we will be considering. The number of angles used for a given value follows the sequence: 1 -> 8, 2 -> 16, 3 -> 32, 4 -> 48, 5 -> 80, 6 -> 96, etc. (https://oeis.org/A137243)
- stop_application_uppon_success [bool] [default=false]: Stop the application upon success.
- use_curve_edge [bool] [default=false]: Whether curving edges are used. A curving edge is an edge that represents a motion that both translates and rotates.
- rotation_use_node_weight [bool] [default=true]: Whether the cost of the rotation is weighted using the position. If true, the path will favor rotating in low-cost areas. If false, rotating has a flat cost no matter where it occurs.
isaac.path_planner.Pose2WaypointFollower¶
Description
Pose2WaypointFollower takes a list of waypoints as input and forwards the list of waypoints the robot has yet to reach. It tracks the robot position, and when the robot gets close enough to a waypoint (using the tolerance parameter), it removes the waypoint from the list and publishes the remaining waypoints. The final waypoint is never removed.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- waypoints_in [Plan2Proto]: The list of waypoints the robot should visit.
Outgoing messages
- waypoints_out [Plan2Proto]: The list of waypoints the robot has yet to visit.
Parameters
- tolerance [Vector2d] [default=Vector2d(1.0, DegToRad(180.0))]: Tolerance to consider whether the robot has reached a waypoint. The first parameter is the distance tolerance, while the second is the angle tolerance.
- robot_frame [string] [default=”robot”]: Name of the frame associated to the robot.
isaac.perception.BirdViewProjection¶
Description
Unprojects a given 2-channel image from perspective to bird’s eye view. The codelet takes the following inputs:
* ImageProto: 2-channel image which is to be unprojected to bird’s eye view
* LatticeProto: Represents the gridmap information corresponding to the unprojected 2-channel image. The information from this message is used to derive some of the parameters required for unprojection, namely cell size and gridmap dimensions.
* CameraIntrinsicsProto: Required to obtain the pinhole model which corresponds to the input 2-channel image.
The codelet outputs an unprojected image along with a lattice proto message with the same parameter values as the input lattice proto, but with the timestamp of the input image. These messages are intended to be used by the local map as a representation of occupancy.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- intrinsics [CameraIntrinsicsProto]: Input proto message containing the pinhole model
- input_image [ImageProto]: Incoming 2-channel float image
- gridmap_lattice [LatticeProto]: Input lattice proto. This contains relevant information about the gridmap corresponding to the input 2-channel image.
Outgoing messages
- bird_view_image [ImageProto]: Output bird view image
- synced_gridmap_lattice [LatticeProto]: Output lattice proto, published with the same timestamp as the bird view image. This contains the same parameter values as the input, but is mainly published so that the timestamps of the 2-channel image and the corresponding lattice match.
Parameters
- camera_frame [string] [default=”camera”]: The name of the camera frame
isaac.perception.ColorSpaceConverter¶
Description
Performs the conversion of an input image into a desired color space and publishes the result. If the input image already has the desired color space, it’s published to output as-is. The input image dimensions are preserved after the color space conversion. All operations are performed on CPU. Currently, RGB(A) to grayscale is the only supported color space conversion.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_image [ImageProto]: input image to be converted into the desired color space.
Outgoing messages
- output_image [ImageProto]: output image that is published after the color space conversion.
Parameters
- color_weights [Vector3d] [default=Vector3d(0.21, 0.72, 0.07)]: Defines the weight of the red, green and blue components for RGB to grayscale conversion. All weights should sum to 1.0. Valid range for each weight is between 0.0 and 1.0.
- input_color_space [ColorSpace] [default=ColorSpace::kRGB]: Defines the color space of an input image. Currently, only grayscale, rgb and rgba input images are supported.
- output_color_space [ColorSpace] [default=ColorSpace::kGRAYSCALE]: Defines the desired color space of the output image. Currently, this parameter is a dummy as RGB(A) to grayscale is the only supported color space conversion.
isaac.perception.CropAndDownsample¶
Description
Codelet to crop and downsample the input image. The input image is first cropped to the desired region of interest and then resized to the desired output dimensions. All operations are performed using CPU functionality.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_image [ImageProto]: Input image
- input_intrinsics [CameraIntrinsicsProto]: Input intrinsics including pinhole and distortion parameters
Outgoing messages
- output_image [ImageProto]: Cropped and resized output image
- output_intrinsics [CameraIntrinsicsProto]: Output intrinsics including pinhole and distortion parameters
Parameters
- crop_mode [CropMode] [default=CropMode::kManual]: Parameter which determines if we should auto-crop the input image. If set to “AutomaticCrop”, the codelet would compute the maximum possible crop which matches the aspect ratio of the desired output specified by downsample. If the parameters crop_start and crop_size are explicitly set by the user, they would be reset by the codelet. If set to “ManualCrop”, the codelet uses the exact crop start and crop size as specified by the user.
- crop_start [Vector2i] [default=]: Top left corner (row, col) for crop
- crop_size [Vector2i] [default=]: Target dimensions (rows, cols) for crop.
- downsample_size [Vector2i] [default=]: Target dimensions (rows, cols) for downsample after crop.
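A configuration sketch cropping a hypothetical 720x1280 input to a square region and downsampling it for a network; the node name crop_downsample and all sizes are illustrative placeholders:
"config": {
  "crop_downsample": {
    "CropAndDownsample": {
      "crop_start": [0, 280],
      "crop_size": [720, 720],
      "downsample_size": [224, 224]
    }
  }
}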
isaac.perception.CropAndDownsampleCuda¶
Description
Codelet to crop and downsample the input image. The input image is first cropped to the desired region of interest and then resized to the desired output dimensions. All operations are performed using CUDA-based functionality.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_image [ImageProto]: Input image
- input_intrinsics [CameraIntrinsicsProto]: Input intrinsics including pinhole and distortion parameters
Outgoing messages
- output_image [ImageProto]: Cropped and resized output image
- output_intrinsics [CameraIntrinsicsProto]: Output intrinsics including pinhole and distortion parameters
Parameters
- crop_start [Vector2i] [default=]: Top left corner (row, col) for crop
- crop_size [Vector2i] [default=]: Target dimensions (rows, cols) for crop.
- downsample_size [Vector2i] [default=]: Target dimensions (rows, cols) for downsample after crop.
- process_intrinsics [bool] [default=true]: Apply crop operation to the intrinsics channel.
isaac.perception.FreespaceToFlatscan¶
Description
Converts a freespace polyline to flatscan representation.
Freespace is represented by “freespace boundary” projected to ground as a 2D polyline. From this representation we build an equivalent flatscan message to enable integration with evidence grid map (EGM) environment/obstacle model.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- freespace [Polyline2dProto]: Incoming 2D freespace polyline
Outgoing messages
- flatscan [FlatscanProto]: Outgoing “flat” range scan
- Parameters
- (none)
isaac.perception.ImageWarp¶
Description
Warps an image from one model to another model. Currently two input and two output models are supported. Input models: perspective and fisheye lens. Output models: perspective and equirectangular. A common use case for this codelet is correction of camera lens distortion effects.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_image [ImageProto]: The input image
- input_intrinsics [CameraIntrinsicsProto]: The parameters include focal length, principal point, radial and tangential distortion parameters, and projection type (perspective or fisheye).
Outgoing messages
- output_image [ImageProto]: The output image
- output_intrinsics [CameraIntrinsicsProto]: The output parameters are set according to the requested target camera model. For perspective they match the source model with no distortion. For an equirectangular model they are computed based on the choice of the pixel_density parameter.
Parameters
- down_scale_factor [int] [default=4]: Scaling of the displayed images in Sight. down_scale_factor is the ratio of the size of the source image to the size of the displayed image.
- gpu_id [int] [default=0]: The GPU device to be used for Warp360 CUDA operations. The default value of 0 suffices for cases where there is only one GPU, and is a good default when there is more than 1 GPU.
- output_model [ImageWarpOutputModel] [default=ImageWarpOutputModel::kPerspective]: The desired output camera model
- pixel_density [double] [default=]: For certain projections this parameter can be used to control the size of the output image. For an equirectangular projection the output size will be the field of view angles of the camera multiplied by this constant. If this parameter is not set the output size will be identical to the input size which can lead to undesired cropping or black bars.
- background_color [Vector3ub] [default=]: Some projections will not fully cover the output image space. In that case the background is black by default, but can be changed to the given color if desired.
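As a hedged sketch, the following application JSON fragment requests an equirectangular output; the node name example_warp is hypothetical, and the string form of the ImageWarpOutputModel enum is an assumption:

```json
{
  "config": {
    "example_warp": {
      "ImageWarp": {
        "output_model": "equirectangular",
        "pixel_density": 500.0,
        "background_color": [0, 0, 0]
      }
    }
  }
}
```

With pixel_density set, the output dimensions become the camera's field of view angles (in radians) multiplied by 500, rather than the input size.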
isaac.perception.PointCloudAccumulator¶
Description
Accumulates point clouds into a larger point cloud. This can be used, for example, to accumulate small, partial/incomplete point clouds produced by a LIDAR into a usable point cloud. For better performance, the point count to accumulate should be a multiple of the incoming sample point cloud size, if possible, or at least far greater. This accumulator can also break down a large message into smaller ones.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- point_cloud [PointCloudProto]: Incoming proto messages used to subscribe to small, point cloud data samples to accumulate.
Outgoing messages
- accumulated_point_cloud [PointCloudProto]: Outgoing proto messages used to publish the accumulated point cloud.
Parameters
- point_count [int] [default=10000]: Number of accumulated points before publishing the point cloud. This parameter can be configured and changed at runtime. The point cloud is published when this number of accumulated points is reached.
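For example, if a lidar publishes roughly 2,000 points per sample, the guidance above suggests accumulating a multiple of that size (the node name example_accumulator is hypothetical):

```json
{
  "config": {
    "example_accumulator": {
      "PointCloudAccumulator": {
        "point_count": 20000
      }
    }
  }
}
```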
isaac.perception.RangeScanFlattening¶
Description
Flattens a 3D range scan into a 2D range scan.
We assume that a range scan is made up of vertical “slices” of beams which are rotated around the lidar at specific azimuth angles. For each azimuth angle all beams of the vertical slice are analyzed and compared to a 2.5D world model to compute a single distance value for that azimuth angle. The pair of azimuth angle and distance is published as a “flat” range scan.
The 2.5D world model assumes that every location in the X/Y plane is either blocked or free. To compute this we assume a critical height slice relative to the lidar defined by a minimum and maximum height. If any return beam of the vertical slice hits an obstacle in that height slice the flat scan will report a hit. In addition to the height interval we also allow for a fudge on the pitch angle of the lidar, which acts as an additional rejection criterion. Essentially every beam return has to be inside the height slice not only for the beam angle alpha, but for all angles in the interval [alpha - fudge, alpha + fudge].
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- scan [RangeScanProto]: Incoming 3D range scan
Outgoing messages
- flatscan [FlatscanProto]: Outgoing “flat” range scan
Parameters
- use_target_pitch [bool] [default=false]: Enables usage of target pitch parameter
- target_pitch [double] [default=]: If this value is set only beams with this pitch angle will be used; otherwise all beams of a vertical beam slice will be used.
- height_min [double] [default=0.0]: Minimum relative height for accepting a return as a collision.
- height_max [double] [default=1.5]: Maximum relative height for accepting a return as a collision.
- pitch_fudge [double] [default=0.005]: Inaccuracy of vertical beam angle which can be used to compensate small inaccuracies of the lidar inclination angle.
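A sketch of a flattening configuration that accepts returns in a height slice between 0.1 m and 1.2 m relative to the lidar (the node name example_flattening is hypothetical):

```json
{
  "config": {
    "example_flattening": {
      "RangeScanFlattening": {
        "height_min": 0.1,
        "height_max": 1.2,
        "pitch_fudge": 0.01
      }
    }
  }
}
```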
isaac.perception.RangeToPointCloud¶
Description
The RangeToPointCloud class converts a range scan into a point cloud.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- scan [RangeScanProto]: The range scan which is to be converted to a point cloud
Outgoing messages
- cloud [PointCloudProto]: The point cloud computed from the range scan
Parameters
- min_fov [double] [default=DegToRad(360.0)]: Minimum field of view to accumulate before sending out the message (in addition to min_count)
- min_count [int] [default=360]: Minimum number of points before sending a point cloud (in addition to min_fov)
- enable_visualization [bool] [default=false]: If set to true the point cloud is visualized with Sight
- sensor_frame [string] [default=”lidar”]: Name of the sensor coordinate frame for the published point cloud
isaac.perception.ScanAccumulator¶
Description
Accumulates slices of a range scan into a full range scan. This can, for example, be used to accumulate the small slices produced by a rotating lidar into a full 360 degree range scan.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- scan [RangeScanProto]: Proto used to subscribe to partial scan lidar data
Outgoing messages
- fullscan [RangeScanProto]: Proto used to publish full scan lidar data
Parameters
- min_fov [double] [default=DegToRad(360.0)]: Minimum FOV before sending out the message (in addition to min_slice_count)
- min_slice_count [int] [default=1800]: Number of slices to accumulate before sending out the message (in addition to min_fov)
- clock_wise_rotation [bool] [default=true]: Turning direction of the LIDAR
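Since defaults such as DegToRad(360.0) are expressed in radians, a JSON override must also be given in radians. A hypothetical fragment accumulating a half rotation of a lidar that produces 900 slices per 180 degrees (the node name example_scan is made up):

```json
{
  "config": {
    "example_scan": {
      "ScanAccumulator": {
        "min_fov": 3.14159265,
        "min_slice_count": 900,
        "clock_wise_rotation": true
      }
    }
  }
}
```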
isaac.perception.StereoRectification¶
Description
Applies lens geometry undistortion and pose alignment to a pair of input images and publishes the rectified results. Only images in Image3ub format are accepted.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_left_image [ImageProto]: Left camera input image buffer and intrinsic parameters.
- input_right_image [ImageProto]: Right camera input image buffer and intrinsic parameters.
- input_left_pose [Pose3dProto]: Left camera input extrinsic parameters, relative to the stereo camera coordinate system. This is currently unused because the ZED camera does not provide it, so it defaults to the identity, i.e. the left camera defines the stereo camera coordinate system.
- input_right_pose [Pose3dProto]: Right camera input extrinsic parameters, relative to the stereo camera coordinate system, which is typically chosen to be at one camera or the mount point. Since input_left_pose is not currently used, the right extrinsic parameters are with respect to the left camera. If not provided, it is assumed to be the identity. Note that the camera coordinate systems have the X-axis pointing to the right, the Y-axis pointing downward, and the Z-axis pointing in the direction of view. Rotations are typically close to the identity.
- input_left_intrinsics [CameraIntrinsicsProto]: Left input intrinsics including pinhole and distortion parameters
- input_right_intrinsics [CameraIntrinsicsProto]: Right input intrinsics including pinhole and distortion parameters
Outgoing messages
- output_left_image [ImageProto]: Left output image buffer and intrinsic parameters.
- output_right_image [ImageProto]: Right output image buffer and intrinsic parameters.
- output_left_pose [Pose3dProto]: Left output extrinsic parameters, relative to the stereo camera coordinate system. Note that this will in general be different than the input pose, even if the input was the identity.
- output_right_pose [Pose3dProto]: Right output extrinsic parameters, relative to the stereo camera coordinate system.
- output_left_intrinsics [CameraIntrinsicsProto]: Left output intrinsics including pinhole and distortion parameters
- output_right_intrinsics [CameraIntrinsicsProto]: Right output intrinsics including pinhole and distortion parameters
Parameters
- use_left_pose [bool] [default=false]: Some cameras supply only one pose: that of the right camera with respect to the left. If a left pose is supplied as well, set use_left_pose to true.
- down_scale_factor [int] [default=4]: Scaling factor of the image visualization in Sight.
- gpu_id [int] [default=0]: The GPU device to be used for Warp360 CUDA operations.
isaac.planner.DifferentialBaseModel¶
Description
Holder of common parameters describing the differential base (two independent controllable wheels defined by the wheel radius and distance between wheels).
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- base_length [double] [default=0.63]: The distance between the two wheels
- wheel_radius [double] [default=0.2405]: The radius of the wheels
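A minimal sketch for a robot with 0.5 m between the wheels and 0.1 m wheel radius (the node name example_robot_model is hypothetical):

```json
{
  "config": {
    "example_robot_model": {
      "DifferentialBaseModel": {
        "base_length": 0.5,
        "wheel_radius": 0.1
      }
    }
  }
}
```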
isaac.planner.DifferentialBaseVelocityIntegrator¶
Description
Publishes a plan by interpolating and integrating velocities. Reads sparse target linear and angular velocities from input message. Uses linear interpolation to compute velocities between time stamps. Uses Riemann sum to approximately integrate for pose.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- target_velocities [CompositeProto]: Contains time series of target values with (linear_speed, angular_speed, time_seconds).
Outgoing messages
- plan [CompositeProto]: Contains time series of poses and velocities to form the trajectory.
Parameters
- delta_time [double] [default=0.02]: Difference between time stamps in outgoing plan message.
isaac.planner.MultiJointPlanner¶
Description
Computes a local plan for a list of joints in a kinematic tree object given their current position and speed and their target position and (optionally) speed. The local plan is computed using an implementation of MultiJointPlannerInterface.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- target_state [CompositeProto]: The target joint position, and optionally speed
- starting_state [CompositeProto]: The initial joint position and speed
Outgoing messages
- plan [CompositeProto]: The local plan (trajectory) for the joints’ position, speed and acceleration
Parameters
- kinematic_tree_node_name [string] [default=]: Node name containing the map::KinematicTree component
- multi_joint_planner_node_name [string] [default=]: Node name containing the MultiJointPlannerInterface component
isaac.planner.SphericalRobotShapeComponent¶
Description
Model of a robot composed of a union of circles. The distance function is approximated by -ln(sum(exp(-alpha * dist_i))) / alpha, where alpha = ln(1 + #circles) * smooth_minimum controls how well the min function is approximated (see the sketch after the parameter list below). If the real distance is D, then we have: D - 1/smooth_minimum <= distance <= D.
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- circles [std::vector<geometry::CircleD>] [default={}]: List of circles that compose the robot
- smooth_minimum [double] [default=20.0]: Parameters to control how well the minimum function is approximated. The error will be in the range: D-1/smooth_minimum <= distance <= D where D = the real distance
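For clarity, the smooth-minimum approximation above can be written out as follows, where d_i is the distance to the i-th circle, n is the number of circles, and D = min_i d_i is the true distance:

```latex
\mathrm{dist} \approx -\frac{1}{\alpha} \ln\left( \sum_{i=1}^{n} e^{-\alpha d_i} \right),
\qquad
\alpha = \ln(1 + n) \cdot \mathrm{smooth\_minimum}
```

Since the smooth minimum underestimates the true minimum by at most ln(n)/alpha, this choice of alpha yields the bound D - 1/smooth_minimum <= dist <= D stated above.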
isaac.planner_cost.AdditionBuilder¶
Description
Implementation of PlannerCostBuilder that uses the Addition evaluation. This component takes a list of component names as an input parameter and builds a cost function that adds the costs of those components together. This is done once in the build() function and is not changed when update() is called.
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- component_names [std::vector<std::string>] [default=]: List of names of the components (each a PlannerCostBuilder implementation) whose costs are added by the cost function. Changes to this parameter won’t be visible until build() is called again.
isaac.planner_cost.BoundedQuadraticCostBuilder¶
Description
Implementation of PlannerCostBuilder that uses the BoundedQuadraticCost evaluation. The cost is: maximum * (1.0 - sigma^2 / (sigma^2 + (scaler * (state - target)).squaredNorm())). This component has four parameters that control the final cost:
- maximum: the factor applied to the final cost
- target: the target state used to compute the cost (see formula above)
- sigma: controls the spread where the function behaves as a quadratic function
- scaler: controls how much each dimension contributes to the cost (optional)
These values can be changed live; the cost function is updated in the next call to update().
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- scaler [VectorXd] [default=]: Scaler applied to each dimension when computing (state - target). This parameter is optional; by default each dimension contributes equally.
- target [VectorXd] [default=]: Target state; the error is computed as the difference from the state to the target.
- sigma [double] [default=1.1]: Controls the spread where the function behaves as a quadratic function.
- maximum [double] [default=1.0]: Sets the maximum value this function can take
- force_positive_definite_hessian [bool] [default=false]: Whether to force the Hessian to be positive definite. For optimization problems it can be important for the Hessian to be positive definite. If this parameter is set to true, the Hessian is approximated by a form which is positive definite.
isaac.planner_cost.CirclesUnionSmoothDistanceBuilder¶
Description
Implementation of PlannerCostBuilder that uses the CirclesUnionSmoothDistance evaluation. This component loads the SphericalRobotShapeComponent to get the list of circles that represents the robot and loads a distance function from another component (specified by the config parameter component_name).
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- component_name [string] [default=]: Name of the component (implementing PlannerCostBuilder) to be used as the distance function
- indices_mapping [Vector3i] [default=Vector3i(0, 1, 2)]: The indices of the [pos_x, pos_y, heading] inside state.
isaac.planner_cost.DistanceQuadraticCostBuilder¶
Description
Implementation of PlannerCostBuilder that uses the DistanceQuadraticCost evaluation. It loads the distance function implementation from a component specified by the config parameter component_name. The list of cost functions is provided using the config parameter costs. See the documentation of that parameter for more details.
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- component_name [string] [default=]: Name of the component (implementing PlannerCostBuilder) to be used as the distance function
- indices_mapping [Vector4i] [default=Vector4i(0, 1, 2, 3)]: The indices of the [pos_x, pos_y, heading, linear_speed] inside state.
- costs [std::vector<Vector3d>] [default=std::vector<Vector3d>({Vector3d(1.0, 0.25, 1.0), Vector3d(500.0, 0.1, 0.0)})]: List of costs to add based on the distance function. Each cost has the form: 0.5 * gain * min(0, distance - speed * speed_gradient - target)^2, where each Vector3d represents the triplet [gain, target, speed_gradient].
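As a sketch, and assuming Vector3d values serialize as three-element arrays, the default costs could be written in an application JSON as follows (the node name example_planner_cost and the distance component name are hypothetical):

```json
{
  "config": {
    "example_planner_cost": {
      "DistanceQuadraticCostBuilder": {
        "component_name": "obstacle_distance",
        "costs": [
          [1.0, 0.25, 1.0],
          [500.0, 0.1, 0.0]
        ]
      }
    }
  }
}
```

Each inner array is a [gain, target, speed_gradient] triplet in the formula above.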
isaac.planner_cost.DotProductCostBuilder¶
Description
Implementation of PlannerCostBuilder that uses the DotProductCost evaluation. This component has two parameters that control the final cost:
- offset: a constant offset added to the final cost
- dot_vector: the vector used to compute the dot product with the state
These values can be changed live; the cost function is updated in the next call to update().
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- offset [double] [default=0.0]: Constant offset to be added to the final cost.
- dot_vector [VectorXd] [default=]: Vector used to compute the dot product with the state.
isaac.planner_cost.ObstacleDistanceBuilder¶
Description
Implementation of PlannerCostBuilder. This component takes an obstacle name from the config and builds an ObstacleDistance function. It also needs to know the expected frame of the state in order to prepare the transformation from the state frame to the obstacle frame.
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- obstacle_name [string] [default=]: Name of the obstacle in the Atlas
- reference_frame [string] [default=”odom”]: Name of frame used by the cost function
isaac.planner_cost.PolygonDistanceQuadraticCostBuilder¶
Description
Implementation of PlannerCostBuilder that uses the PolygonDistanceQuadraticCost evaluation. It takes a polygon as a config parameter and produces a smooth quadratic error based on the distance to the polygon. If the polygon is provided in clockwise order, the cost penalizes being outside the polygon; otherwise it penalizes being inside.
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- gain [double] [default=1.0]: Gain of the quadratic cost: 0.5 * gain * min(0, offset + distance)^2
- offset [double] [default=0.0]: Offset of the quadratic cost: 0.5 * gain * min(0, offset + distance)^2
- indices [Vector2i] [default=Vector2i(0, 1)]: Index for [pos_x, pos_y] inside state
- update_polygon [bool] [default=true]: Parameter to track if the polygon needs to be updated.
- polygon [geometry::Polyline2d] [default={}]: Takes a list of 2D points that represents the polygon. If the vertices are provided in clockwise order, it penalizes being outside the polygon.
isaac.planner_cost.PolygonSpeedLimitBuilder¶
Description
Implementation of PlannerCostBuilder that uses the PolygonSpeedLimit evaluation. It loads a list of polygons from a PolygonMapLayer (either selected by name or all of them) and penalizes the robot for exceeding the speed limit inside one of the polygons. The cost quickly approaches a quadratic cost but has a smooth border that can be configured using the sigma parameter. The final cost is: 0.5 * gain * max(0, speed - speed_limit)^2 * distance^2 / (distance^2 + sigma^2), where distance is the distance inside one of the polygons and speed_limit and sigma are defined by the config parameters.
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- gain [double] [default=1.0]: Gain of the cost (see above for the exact formula)
- speed_limit [double] [default=0.5]: Maximum allowed speed. If the robot is below this limit, the cost is always 0, otherwise if the robot is inside one of the polygons, it is penalized with an almost quadratic cost (see the formula above).
- indices [Vector3i] [default=Vector3i(0, 1, 3)]: Index for [pos_x, pos_y, speed] inside state
- sigma [double] [default=0.1]: Controls the spread where the function behaves as a quadratic function (see the formula above). Sigma must be different from 0.0.
- force_positive_definite_hessian [bool] [default=true]: Whether to force the Hessian to be positive definite. For optimization problems it can be important for the Hessian to be positive definite. If this parameter is set to true, the Hessian is approximated by a form which is positive definite.
- polygon_layer_name [string] [default=]: Name of the component that contains the PolygonMapLayer used to get the list of polygons.
- polygon_names [std::vector<std::string>] [default=]: List of the names of the polygons in the PolygonMapLayer used to compute the cost. If not provided, all polygons are used.
- polygon_frame [string] [default=”world”]: The name of the frame in which the polygon’s points are provided.
- planning_frame [string] [default=”odom”]: The name of the frame used for planning.
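A hedged sketch limiting the robot to 0.3 m/s inside two named polygons; the node, layer, and polygon names are hypothetical:

```json
{
  "config": {
    "example_speed_limit": {
      "PolygonSpeedLimitBuilder": {
        "speed_limit": 0.3,
        "sigma": 0.1,
        "polygon_layer_name": "map/restricted_areas",
        "polygon_names": ["loading_dock", "crosswalk"]
      }
    }
  }
}
```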
isaac.planner_cost.PolylineDistanceQuadraticCostBuilder¶
Description
Implementation of PlannerCostBuilder that uses the PolylineDistanceQuadraticCost evaluation. It takes a path to follow as a config parameter and produces a smooth quadratic error based on the distance to the polyline. If speed_reward is not 0.0, an additional linear reward is added on the speed to keep the robot moving in the direction of the path.
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- gain [double] [default=]: Gain of the quadratic cost: 0.5 * gain * distance^2
- indices [Vector3i] [default=Vector3i(0, 1, 3)]: Index for [pos_x, pos_y, speed] inside state.
- update_polyline [bool] [default=true]: Parameter to track if the polyline needs to be updated.
- speed_reward [double] [default=0.0]: Speed reward to make the robot move in the direction of the polyline. (Positive reward means moving in the direction of the trajectory while a negative reward will reward moving in the opposite direction).
- min_distance_for_reward [double] [default=0.0]: The speed reward is only applied if the distance to the end target is more than this distance (this effectively sets the speed_reward to 0.0 when the robot gets too close)
- polyline [geometry::Polyline2d] [default={}]: Takes a list of 2D points as the polyline
- maximum_number_points [int] [default=20]: The maximum number of waypoints to be forwarded to the planner_cost. Computing the smooth distance to a polyline is expensive; reducing the number of waypoints can significantly improve performance.
- minimum_distance_waypoint [double] [default=1.0]: If two waypoints are closer than this distance, the latter is ignored (unless it is the last one). Computing the distance to a polyline is expensive, and waypoints close to each other do not add much information.
- polyline_frame [string] [default=”odom”]: The name of the frame in which the polyline’s points are provided.
- planning_frame [string] [default=”odom”]: The name of the frame used for planning.
- robot_frame [string] [default=”robot”]: The name of the robot frame.
isaac.planner_cost.RangeConstraintsCostBuilder¶
Description
Implementation of PlannerCostBuilder that uses the RangeConstraintsCost evaluation. This component has three parameters that control the final cost:
- gains: the factor applied to each dimension
- min_value/max_value: describe the range where the cost is 0; outside this range a quadratic cost is applied
These values can be changed live; the cost function is updated in the next call to update().
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- gains [VectorXd] [default=]: Gain associated with each dimension
- min_value [VectorXd] [default=]: Minimum value for each dimension; if the state is below this value, a quadratic cost is applied
- max_value [VectorXd] [default=]: Maximum value for each dimension; if the state is above this value, a quadratic cost is applied
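As an illustration, assuming VectorXd values serialize as arrays and the state layout is (pos_x, pos_y, heading, linear_speed), the following hypothetical fragment penalizes only linear speeds outside [0, 1] m/s:

```json
{
  "config": {
    "example_constraints": {
      "RangeConstraintsCostBuilder": {
        "gains": [0.0, 0.0, 0.0, 200.0],
        "min_value": [0.0, 0.0, 0.0, 0.0],
        "max_value": [0.0, 0.0, 0.0, 1.0]
      }
    }
  }
}
```

With zero gains on the first three dimensions, their ranges have no effect on the cost.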
isaac.planner_cost.SmoothLinearTargetCostBuilder¶
Description
Implementation of PlannerCostBuilder that uses the SmoothLinearTargetCost evaluation. The cost is: gain * sqrt(epsilon^2 + (scaler * (state - target)).squaredNorm()). This component has four parameters that control the final cost:
- gain: the factor applied to the final cost
- target: the target state used to compute the cost (see formula above)
- epsilon: controls how closely the absolute value is approximated
- scaler: controls how much each dimension contributes to the cost (optional)
These values can be changed live; the cost function is updated in the next call to update().
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- scaler [VectorXd] [default=]: Scaler applied to each dimension when computing (state - target). This parameter is optional; by default each dimension contributes equally.
- target [VectorXd] [default=]: Target state; the error is computed as the difference from the state to the target.
- epsilon [double] [default=0.1]: Controls the approximation of the absolute value: sqrt(epsilon^2 + d^2) ~ |d|. As epsilon goes to zero, the formula converges to the absolute value.
- gain [double] [default=1.0]: Gain applied to the final cost
- force_positive_definite_hessian [bool] [default=false]: Whether to force the Hessian to be positive definite. For optimization problems it can be important for the Hessian to be positive definite. If this parameter is set to true, the Hessian is approximated by a form which is positive definite.
isaac.planner_cost.SmoothMinimumBuilder¶
Description
Implementation of PlannerCostBuilder that uses the SmoothMinimum evaluation. This component takes a list of component names as an input parameter and computes a smooth approximation of the minimum of their costs. The list of components to load is read only inside the build() function; no update will be visible until the PlannerCost is built again.
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- component_names [std::vector<std::string>] [default=]: List of names of the components (a PlannerCostBuilder implementation) to be used by the cost function. Changes to this parameter won’t be visible until build() is called again.
- minimum_smoothing [double] [default=20.0]: Smoothing factor used inside this function. This controls how close to the minimum the function will be. The expected difference between the minimum and the evaluation of cost_ will be lower than ln(builders_.size()) / minimum_smoothing. If minimum_smoothing is negative, this function changes into a smooth maximum. This value must be different from 0.0.
isaac.pwm.PwmController¶
Description
Interface for a PCA9685 PWM controller device. This device is used to send PWM signals to peripherals.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- set_duty_cycle [PwmChannelSetDutyCycleProto]: PwmChannelSetDutyCycleProto is used to set a duty cycle for a PWM channel. The duty cycle is given as a percentage, from 0.00 to 1.00. Note: setting a PWM value for a channel automatically enables that channel.
- set_pulse_length [PwmChannelSetPulseLengthProto]: PwmChannelSetPulseLengthProto is used to set a pulse length for a PWM channel. The pulse length is given as a percentage, from 0.00 to 1.00, of the cycle.
- Outgoing messages
- (none)
Parameters
- i2c_device_num [int] [default=0]: I2C device ID; matches /dev/i2c-X
- pwm_frequency_in_hertz [int] [default=50]: Defines the frequency at which the PWM outputs modulate, in hertz. 50 Hz is common for servos.
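A minimal sketch driving servos from the device /dev/i2c-1 (the node name example_pwm is hypothetical):

```json
{
  "config": {
    "example_pwm": {
      "PwmController": {
        "i2c_device_num": 1,
        "pwm_frequency_in_hertz": 50
      }
    }
  }
}
```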
isaac.rgbd_processing.DepthEdges¶
Description
Computes edges on a depth image. By default a GPU-accelerated method is used.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- depth [ImageProto]: Depth image used for point and normal computation
Outgoing messages
- edges [ImageProto]: Pixel edge likelihood stored as unit FP32
Parameters
- edge_jump_threshold [double] [default=0.06]: Threshold in meters after which a jump in distance between two pixels is considered an edge.
- min_depth [double] [default=]: Depth values smaller than or equal to the given value will be marked as edges.
- max_depth [double] [default=]: Depth values larger than or equal to the given value will be marked as edges.
- use_gpu [bool] [default=true]: If enabled GPU accelerated CUDA kernels are used; otherwise computations are done on CPU.
- reduce_scale [int] [default=4]: Reduction factor for image sent to sight, values greater than one will shrink the image by that factor.
isaac.rgbd_processing.DepthImageFlattening¶
Description
The DepthImageFlattening class flattens a depth image into a 2D range scan.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- depth [ImageProto]: Input depth image
- intrinsics [CameraIntrinsicsProto]: Intrinsics of depth camera
Outgoing messages
- flatscan [FlatscanProto]: Output range scan
Parameters
- camera_frame [string] [default=”camera”]: The name of the camera coordinate frame
- ground_frame [string] [default=”ground”]: The name of the ground coordinate frame
- fov [double] [default=DegToRad(90.0)]: The field of view to use for the result range scan
- sector_delta [double] [default=DegToRad(0.5)]: Angular resolution of the result range scan
- min_distance [double] [default=0.2]: Minimum distance for the result range scan
- max_distance [double] [default=5.0]: Maximum distance for the result range scan
- range_delta [double] [default=0.10]: Range resolution of the result range scan
- cell_blocked_threshold [int] [default=10]: A sector in the result range scan is marked as blocked after the given number of points.
- height_min [double] [default=0.20]: Minimum height in ground coordinates in which a point is considered to be an obstacle
- height_max [double] [default=1.00]: Maximum height in ground coordinates in which a point is considered to be an obstacle
- skip_row [int] [default=0]: Number of pixels in row that are skipped while parsing the image
- skip_column [int] [default=0]: Number of pixels in column that are skipped while parsing the image
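A hedged configuration sketch for a depth camera mounted on a small robot, treating points between 0.15 m and 1.2 m above the ground as obstacles and skipping every other row and column for speed (the node name example_depth_flattening is hypothetical):

```json
{
  "config": {
    "example_depth_flattening": {
      "DepthImageFlattening": {
        "height_min": 0.15,
        "height_max": 1.2,
        "max_distance": 4.0,
        "skip_row": 1,
        "skip_column": 1
      }
    }
  }
}
```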
isaac.rgbd_processing.DepthImageToPointCloud¶
Description
Create a point cloud from a depth image. Every pixel is “unprojected” based on its depth and the camera model. The point cloud is transformed into the desired target frame using the given transformation cloud_T_camera.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- depth [ImageProto]: Input depth image
- intrinsics [CameraIntrinsicsProto]: Input depth or color image intrinsics
- color [ImageProto]: Input color image to color points (optional)
Outgoing messages
- cloud [PointCloudProto]: The computed point cloud
Parameters
- use_color [bool] [default=false]: If this is enabled a color image will be used to produce a colored point cloud. This can only be changed at program start.
isaac.rgbd_processing.DepthNormals¶
Description
Computes normals for a depth image based on computed points and respecting edges. This component uses GPU acceleration by default.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- points [ImageProto]: Pixel points stored as a 3-channel FP32 image
- edges [ImageProto]: Pixel edge likelihood stored as unit FP32
Outgoing messages
- normals [ImageProto]: Pixel normals stored as a 3-channel FP32 image
Parameters
- normals_smooth_radius [int] [default=7]: Radius over which normals are smoothed
- use_gpu [bool] [default=true]: If enabled GPU accelerated CUDA kernels are used; otherwise computations are done on CPU.
- reduce_scale [int] [default=4]: Reduction factor for image sent to sight, values greater than one will shrink the image by that factor.
isaac.rgbd_processing.DepthPoints¶
Description
Computes 3D points from a depth image and stores them as an Image3f. By default a GPU-accelerated implementation is used.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- depth [ImageProto]: Depth image used for point and normal computation
- intrinsics [CameraIntrinsicsProto]: Intrinsics of depth camera
Outgoing messages
- points [ImageProto]: Pixel points stored as a 3-channel FP32 image
Parameters
- use_gpu [bool] [default=true]: If enabled GPU accelerated CUDA kernels are used; otherwise computations are done on CPU.
- reduce_scale [int] [default=4]: Reduction factor for image sent to sight, values greater than one will shrink the image by that factor.
isaac.rgbd_processing.FreespaceFromDepth¶
Description
The FreespaceFromDepth class flattens a depth image into a 2D range scan.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- depth [ImageProto]: Input image use to compute the range scan
- intrinsics [CameraIntrinsicsProto]: Intrinsics of depth camera
Outgoing messages
- flatscan [FlatscanProto]: Output the freespace as a range scan that can be used for example to produce a local map for navigation
Parameters
- last_range_cell_additional_contribution [double] [default=2.5]: In order to favor the last cell in case there is no obstacle, we arbitrarily increase the value by this factor scaled by the average occupancy.
- edge_distance_cost [double] [default=0.5]: Factor to compute the cost of an edge (multiplied by the distance). Reducing this value might increase processing time.
- max_edge_cost [double] [default=1.0]: Cap on the maximum cost of an edge (Reducing this value might speed up the processing time.)
- max_contribution_after_wall [double] [default=2.5]: Once we hit a wall, we cap the value of a cell at: max_contribution_after_wall * average_weight
- wall_threshold [double] [default=5.0]: The minimum value needed for a cell to be considered as a wall (as a factor of the average value.)
- fov [double] [default=DegToRad(90.0)]: The field of view to use for the result range scan
- num_sectors [int] [default=180]: Angular resolution of the result range scan
- range_delta [double] [default=0.1]: Range resolution of the result range scan
- height_min [double] [default=-1.00]: Minimum height in ground coordinates in which a point is considered valid
- height_max [double] [default=2.00]: Maximum height in ground coordinates in which a point is considered valid
- max_distance [double] [default=20.0]: Max range for the extraction.
- reduce_scale [int] [default=2]: Reduction factor for image. Values greater than one shrink the image by that amount
- integrate_temporal_information [bool] [default=false]: If enabled, information is integrated over time (across successive frames) when computing the freespace
- use_predicted_height [bool] [default=false]: Whether to use the predicted height (from measurement) or 0 when rendering the freespace
- camera_name [string] [default=]: Name of the camera used to get the camera position in the world
isaac.rl.DollyDockingAuxDecoder¶
Description
Generates an auxiliary tensor as described in https://arxiv.org/abs/1606.01540. The auxiliary tensor is used to inform the codelets later in the pipeline about important events that impact the flow of Gym and simulation. For the dolly navigation task, the auxiliary tensor is of dimension 7 and the data is arranged as follows:
Index 0: Flag to signify if a collision has occurred in simulation
Index 1: Used by DollyDockingDeath to store a value indicating whether the agent died due to old age
Index 2: Ground truth pose of dolly relative to robot (robot_T_dolly): x-coordinate (forward)
Index 3: Ground truth pose of dolly relative to robot (robot_T_dolly): y-coordinate (sideways)
Index 4: Ground truth pose of dolly relative to robot (robot_T_dolly): heading
Index 5: Difference between dolly and robot translations (dolly_T_robot): x-coordinate
Index 6: Difference between dolly and robot translations (dolly_T_robot): y-coordinate
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- body_collision [CollisionProto]: Receives a collision message whenever a robot collision occurs in simulation
- scene_pose [RigidBody3GroupProto]: Message containing the true positions of all the bodies from simulation
Outgoing messages
- agent_aux [TensorProto]: Send the composed auxiliary tensor to Gym
- Parameters
- (none)
isaac.rl.DollyDockingBirth¶
Description
Publishes reset messages to simulation when an agent dies in the dolly docking scene. StateMachineGymFlow expects one clone of this codelet for each agent in simulation and resets the scene for a single agent whenever Gym calls its spawn function. This codelet is also responsible for randomizing the various objects in the scene. The tx hook is created dynamically and the output channel of this codelet must be connected to the Teleport script for the particular agent scene inside the simulation.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- names [std::vector<std::string>] [default=std::vector<std::string>({“robot”, “dolly”, “left_wall”, “right_wall”, “rear_wall”, “left_block”, “right_block”, “rear_block”})]: Names of objects in the scene that can be reset. The order of the strings must follow the order in the reset_state tensor
- agents_per_row [int] [default=3]: Number of agents in each row of the scene
- use_grid_displacement [bool] [default=true]: If a translation offset needs to be added to each scene object’s coordinates in order to spawn scenes along a grid
- start_coordinate [Vector2d] [default=Vector2d(0.0, -2.0)]: Start spawning robots from the given x and y coordinate
- target_separation [Vector2d] [default=Vector2d(4.0, 0.0)]: Distance between the target (dolly) and the robot along the x and y coordinates
- obstacle_separation [Vector2d] [default=Vector2d(0.0, 5.0)]: Spawn position of the obstacles (walls, blocks, etc.) before randomization is applied, along the x and y coordinates
- agent_spawn_randomization [Vector3d] [default=Vector3d(0.0, 0.0, 0.2)]: Randomize the pose of the robot from its center in the x, y and angle coordinates. This parameter lists the maximum displacement allowed in each of those coordinates
- target_spawn_randomization [Vector3d] [default=Vector3d(0.0, 0.0, 0.2)]: Randomize the pose of the target (dolly) from its center in the x, y and angle coordinates. This parameter lists the maximum displacement allowed in each of those coordinates
- obstacle_spawn_randomization [Vector3d] [default=Vector3d(2.0, 2.0, 0.2)]: Randomize the pose of the walls and blocks from their centers in the x, y and angle coordinates. This parameter lists the maximum displacement allowed in each of those coordinates
- obstacle_scale_randomization [Vector3d] [default=Vector3d(0.0, 2.0, 0.0)]: Randomize the scale of the blocks from their original sizes. This parameter lists the maximum displacement allowed in the elements of the scale vector
- dividing_space [Vector2d] [default=Vector2d(30.0, 35.0)]: Distance between two robot-dolly setups in simulation along x and y coordinate
- odometry_codelet_prefix [string] [default=”odometry_”]: DollyDockingBirth expects each odometry node to host a single odometry codelet. It assumes the nodes are named in the format odometry_1, odometry_2, …, odometry_n. This param sets the prefix for the node name.
- pid_codelet_prefix [string] [default=”pid_”]: DollyDockingBirth expects each controller node to host a single controller codelet. It assumes the nodes are named in the format pid_1, pid_2, …, pid_n. This param sets the prefix for the node name.
isaac.rl.DollyDockingDeath¶
Description
DollyDockingDeath implements the Death interface. StateMachineGymFlow expects one clone of this codelet for each agent in simulation and evaluates the state of a single agent at every step. It decides whether an agent needs to be killed and respawned. The conditions for the robot dying are: 1. the robot collides with another object in the scene; 2. the age of the robot is greater than the maximum allowable age; 3. the robot has veered far off from its goal (for example, it turned 180 degrees).
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- maximum_age [int] [default=500]: Stores the maximum number of timesteps the robot is allowed to run
- x_allowance [Vector2f] [default=Vector2f(-1.0f, 6.0f)]: The maximum allowable distance that the agent can move before getting killed along negative and positive x direction respectively
- y_allowance [float] [default=3.5f]: The maximum allowable distance that the agent can move before getting killed along y-direction (on either side)
- angle_allowance [float] [default=1.3f]: The maximum allowable radians that the agent can rotate before getting killed on either direction of the axis
isaac.rl.DollyDockingReward¶
Description
DollyDockingReward implements the Reward interface. StateMachineGymFlow expects one clone of this codelet for each agent in simulation and computes the reward for the robot docking under a dolly task. Rewards are gained by the agent if the action that was performed helped the agent get closer to the center of the target dolly.
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- collision_penalty [float] [default=-50.0]: Penalty awarded when the robot collides with the dolly or walls in the scene
- success_reward [float] [default=100.0]: Final reward received on reaching the target pose within tolerance limits
- reward_clip_range [Vector2f] [default=(Vector2f{-20.0, 20.0})]: Trim the reward values if they exceed this range on either side of the number scale
- tolerance [Vector2f] [default=(Vector2f{0.11, 0.085})]: Deltas along the x and y directions from the center of the dolly which are considered to be successful docking end poses
- bias [Vector3f] [default=(Vector3f{1.0, 7.5, 10.0})]: Coefficients for the x-coordinate, y-coordinate and the angle (radians) in the cost equation
- ideal_docking_pose [Vector2f] [default=(Vector2f{0.335, 0.0})]: Perfect docking coordinate of the robot with respect to the center of the dolly in (x,y)
isaac.rl.DollyDockingStateDecoder¶
Description
Forms the state tensor, which is the input to the policy neural network. For the dolly navigation task, the input is composed of: 1. the state of the robot derived from the odometry message (odom_T_robot.x, odom_T_robot.y, odom_T_robot.angle, linear speed, angular speed, linear acceleration, angular acceleration); 2. the observation map for the environment. The expected size of the map is 128 * 128. The observation map is flattened into a one-dimensional tensor of size 16384. The tensor published by the codelet consists of 16391 floats.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- base_state [Odometry2Proto]: Receive odometry data for the robot that forms a part of the input to the policy
- local_map [ImageProto]: Receive the observation map for the robot that forms a part of the input to the policy
Outgoing messages
- agent_state [TensorProto]: Send the composed agent state to Gym in the form of a one-dimensional tensor by combining the odometry data and the observation map data together
Parameters
- use_pose_tree [bool] [default=false]: Specify if the latest pose of the robot should be queried from the pose tree
- robot_frame [string] [default=”robot”]: Name of robot’s frame
- target_frame [string] [default=”dolly”]: Name of the target frame. Used to convert odom message to robot_T_target information.
isaac.rl.DollyDockingStateNoiser¶
Description
DollyDockingStateNoiser implements the StateRefiner interface. StateMachineGymFlow expects one clone of this codelet for each agent in simulation. It is responsible for storing a noisy target pose of the dolly (odom_T_target) and post-processing the coordinates from odom_T_robot to robot_T_target.
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- target_pose_noise [Vector3f] [default=Vector3f(0.0, 0.0, 0.0)]: Noise to add to the target pose along the x, y, and angle coordinates during training of the reinforcement learning policy
isaac.rl.StateMachineGymFlow¶
Description
The Gym codelet provides a state machine to run reinforcement learning algorithms. For n agents, it expects n Reward codelets, n Death codelets, n Birth codelets and optionally n StateRefiner codelets to be present in the same node. At the beginning, the state machine synchronizes its clock with the simulation clock. Once the clocks are synchronized, it sets the state of all agents to dead in order to initialize the scene and begins the following cycle:
1. Call the spawn() function of the Birth component for each dead agent in simulation
2. Wait until the aggregated state tensor is received from the aggregator, along with auxiliary tensors if present
3. Pass the newly received state tensor to the Death::is_dead() function, which evaluates if the agent is dead for being in an invalid state
4. Compute rewards for each agent based on the latest transition
5. Publish the state of the robots along with rewards, death flags and the last action performed, as aggregate tensors
6. Wait until the neural network performs a forward pass, taking in the state tensor to output the action tensor
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- state [TensorProto]: Receive vectorized states of agents after performing actions in simulation. This channel is typically connected to the TensorAggregator codelet, receiving a two-dimensional tensor from the State Decoders
- aux [TensorProto]: Receive auxiliary/miscellaneous state information from the simulator. This channel is typically connected to the TensorAggregator codelet, receiving a two-dimensional tensor of size (num_agents, aux_dim) from the Aux Decoders
- action [TensorProto]: Receive vectorized actions for agents as a tensor of size (num_agents, action_dim). This channel is typically connected to the neural network inference codelet
Outgoing messages
- command [TensorProto]: Forwards data received from the rx_action() channel after progressing the state machine
- transition_state [TensorProto]: Send latest agent states for the recently concluded transition. This channel is typically connected to the TemporalBatching codelet
- transition_action [TensorProto]: Send actions that were last performed in the recently concluded transition. This channel is typically connected to the TemporalBatching codelet
- transition_reward [TensorProto]: Send the rewards obtained by the agents in the recently concluded transition. This channel is typically connected to the TemporalBatching codelet
- transition_dead [TensorProto]: Send the dead or alive vector for the agents in the recently concluded transition. This channel is typically connected to the TemporalBatching codelet
- transition_aux [TensorProto]: Send the aux tensor for the agents in the recently concluded transition. This channel is typically connected to the TemporalBatching codelet
Parameters
- num_agents [int] [default=1]: Specify number of agents in simulation
- state_dimension [int] [default=1]: Specify dimensions of the state space for each agent
- action_dimension [int] [default=1]: Specify dimensions of the action space for each agent
- aux_dimension [int] [default=1]: Specify dimensions of auxiliary data per agent
- episode_end_flag_index [int] [default=-1]: Specify the index in the auxiliary tensor that contains a flag indicating if an episode has ended. This flag is optional but setting it ensures that when the agents are reset at the end of the episode, they are not penalized in the reward unfairly.
- delay_sending [float] [default=2.5]: Delay in seconds before sending first message
- reaction_time [float] [default=0.1]: Delay in seconds before simulation reacts
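As a sketch for the dolly docking task described in this section, the dimensions below follow from the decoders above (16391-float state, 7-element aux tensor, six-element action); the node name example_gym and the choice of index 1 as the episode end flag are assumptions:

```json
{
  "config": {
    "example_gym": {
      "StateMachineGymFlow": {
        "num_agents": 9,
        "state_dimension": 16391,
        "action_dimension": 6,
        "aux_dimension": 7,
        "episode_end_flag_index": 1
      }
    }
  }
}
```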
isaac.rl.TemporalBatching¶
Description
The Temporal Batching codelet performs two distinct functions. First, it sends the state tensor to the neural network policy for inference. To do so, it aggregates the state tensors received over the ‘step’ channel over multiple past ticks into a flattened two dimensional tensor (one row for each agent). This flattened tensor is created by copying the most recent state tensor that was received, followed by the state tensor received in the timestep before it, and so on. During the first timestep in the life of the agent, or when enough historical tensors are unavailable, the first tensor is repeatedly copied to fill the space in the flattened tensor for that agent. This allows us to create inputs to the neural network composed of past states and actions, not just the current immediate state and action. Second, the codelet is also responsible for populating the sample accumulator. To do so, it deaggregates the two dimensional tensors received from Gym into separate one dimensional tensors and publishes the states, actions, rewards, and dead flag for each agent individually to the Sample Accumulator. Data sent simultaneously to the sample accumulator is treated as a single tuple. Hence, at every step, the number of tuples published from the Temporal Batching to the Sample Accumulator is equal to the number of agents in simulation.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- step [TensorProto]: Receive 2 dimensional states, one row for each agent transition
- action [TensorProto]: Receive 2 dimensional actions, one row for each agent action
- reward [TensorProto]: Receive rewards, one cell for each agent
- aux [TensorProto]: Receive 2 dimensional aux, one row for each agent transition
- dead [TensorProto]: Receive 1 dimensional tensor, one cell for each agent indicating if the agent is dead or alive
Outgoing messages
- temporal_tensor [TensorProto]: Publishes the two dimensional state tensor for inference to the neural network. The published state tensor is formed by concatenating the tensors received on the step channel over multiple past timesteps
- state_samples [TensorProto]: Send the current state of the agents to the sample accumulator
- last_state_samples [TensorProto]: Send the state of the agents in the last tick to the sample accumulator
- reward_samples [TensorProto]: Send the reward obtained by the agents in the current tick to the sample accumulator
- action_samples [TensorProto]: Send the actions performed by the agents in the current tick to the sample accumulator
- dead_samples [TensorProto]: Send the dead or alive status of the agents in the current tick to the sample accumulator
Parameters
- num_agents [int] [default=1]: Specify number of the agents in simulation
- look_back [int] [default=1]: Specify the number of past steps to store in the history
- reset_interval [int] [default=-1]: Clear the history of all agents based on the elapsed number of ticks, irrespective of their age. If -1, the agent history is only cleared at the death of the agent.
- is_connected_to_gym [bool] [default=true]: If the input to the Temporal Batcher is not from StateMachineGymFlow, then ignore all inputs other than the state. This might be useful when deploying to hardware where the StateMachineGymFlow is not needed.
- episode_end_flag_index [int] [default=-1]: The StateMachineGymFlow and SequentialGymFlow codelets in Isaac SDK publish a unified tuple (s,a,r,d,aux). This parameter contains the index of the flag in the aux tensor that signifies the end of the episode. When the episode ends and the agent is not dead, the past data still needs to be cleared. If -1, it is assumed that there is no end of episode flag.
isaac.rl.TensorAggregator¶
Description
Combines an arbitrary number of one-dimensional tensors of the same size into a single two-dimensional tensor. Each message channel represents a single row in the output tensor. The row number of a received 1f tensor within the combined 2f tensor depends on the order in which the channels are connected to this codelet. The channel connected first to this codelet in the application json fills the first row in the combined tensor.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- aggregate_tensor [TensorProto]: Publish the aggregated two-dimensional tensor message
Parameters
- wait_till_all_dirty [bool] [default=true]: Block publication of the aggregated message until a new tensor is received for each row. If false, an aggregate two-dimensional tensor is published every tick with only those rows updated that had new tensor messages on their channels between the ticks.
isaac.rl.TensorDeaggregator¶
Description
Splits or deaggregates the received two-dimensional tensor into a set of one-dimensional tensors (one tensor per row). Each output message channel represents a single row in the input tensor. The row number of a published 1f tensor within the input 2f tensor depends on the order in which the output channels are connected from this codelet. The output channel connected first from this codelet in the application json will contain the first row extracted from the input tensor.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- aggregate_tensor [TensorProto]: Receives the two-dimensional tensor whose rows need to be published as one-dimensional tensors
- Outgoing messages
- (none)
- Parameters
- (none)
isaac.rl.TensorToCompositeVelocityProfile¶
Description
Converts the tensor received from the neural network policy into a Composite. The six elements of the tensor form linear and angular velocity pairs for the next three timesteps. The codelet rescales the elements of the tensor to the desired range and appends a target timestamp to each row of the Composite. The target timestamp signifies the time by which the robot must achieve the controls predicted by the neural network.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- policy [TensorProto]: Receives the output of the neural network as a 1d tensor during training or when conducting inference without a frozen model.
Outgoing messages
- velocity_profile [CompositeProto]: Publishes the scaled values of the input tensor as a Composite with timestamps appended to each prediction, signifying velocity targets the low-level controller needs to achieve at those defined timesteps.
Parameters
- input_linear_velocity_range [Vector2f] [default=(Vector2f{-0.5, 0.5})]: Range of the values expected from the neural network for linear velocity indices
- input_angular_velocity_range [Vector2f] [default=(Vector2f{-0.5, 0.5})]: Range of the values expected from the neural network for angular velocity indices
- output_linear_velocity_range [Vector2f] [default=(Vector2f{0.0, 1.0})]: Rescale the received linear velocity indices to this range
- output_angular_velocity_range [Vector2f] [default=(Vector2f{-0.5, 0.5})]: Rescale the received angular velocity indices to this range
- timestamp_profile [Vector3f] [default=(Vector3f{0.1, 0.4, 0.8})]: Target timesteps to append to each of the three velocity pairs. These timesteps are added to the tick time of the codelet to form the target timestamps.
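A hedged sketch rescaling network outputs in [-0.5, 0.5] to robot-specific velocity ranges, assuming Vector2f/Vector3f values serialize as arrays (the node name example_profile is hypothetical):

```json
{
  "config": {
    "example_profile": {
      "TensorToCompositeVelocityProfile": {
        "input_linear_velocity_range": [-0.5, 0.5],
        "input_angular_velocity_range": [-0.5, 0.5],
        "output_linear_velocity_range": [0.0, 0.65],
        "output_angular_velocity_range": [-0.3, 0.3],
        "timestamp_profile": [0.1, 0.4, 0.8]
      }
    }
  }
}
```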
isaac.rmpflow.MultiJointRmpPlanner¶
Description
Implementation of MultiJointPlannerInterface. Computes a local plan for a list of joints in a kinematic tree object given their current position and speed and their target position. The local plan is computed by integrating a Riemannian motion policy (RMP) in the RMPflow framework.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- num_integrator_substeps [int] [default=10]: Factor by which to divide “delta_time” to arrive at a smaller timestep used for rolling out the trajectory. For example, a delta_time of 0.1 and num_integrator_substeps of 10 would result in an effective substep size of 0.01 seconds.
- max_trajectory_length [double] [default=4.0]: Maximum trajectory length, in seconds
- position_tolerance [double] [default=1e-4]: Minimum tolerance desired. The target is considered to have been reached once |final_position - target_position| < position_tolerance, where |x| denotes the L2 norm.
- rmpflow_config_file [string] [default=]: Path to Lula RMPflow configuration file
- robot_description_file [string] [default=]: Path to Lula robot description file for the robot
- robot_urdf [string] [default=]: Path to URDF for the robot, used by Lula for kinematics
- end_effector_frame [string] [default=]: End effector control frame for the robot
isaac.ros_bridge.CameraIntrinsicsToRos¶
Description
This codelet receives CameraIntrinsicsProto data within an Isaac application and publishes it to ROS.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- frame_id [string] [default=”camera”]: This parameter populates frame_id in the ROS CameraInfo message. Details at http://docs.ros.org/api/sensor_msgs/html/msg/CameraInfo.html
isaac.ros_bridge.FlatscanToRos¶
Description
This codelet receives flatscan data within an Isaac application and publishes it to ROS.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- frame_id [string] [default=”base_scan”]: Name of the frame to be used in outgoing message
isaac.ros_bridge.GoalToRos¶
Description
This codelet receives a goal as a message within an Isaac application and publishes it to ROS as a message. If goal feedback is needed, use the similar codelet named “GoalToRosAction” instead.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- goal_frame [string] [default=”map”]: Frame of the goal in outgoing message
- robot_frame [string] [default=”base_link”]: Frame of the robot in ROS. Used to stop the robot if needed.
isaac.ros_bridge.GoalToRosAction¶
Description
This codelet receives a goal as a message within an Isaac application and publishes it to ROS as an action. Unlike the similar codelet named “GoalToRos”, GoalToRosAction also publishes feedback as Goal2FeedbackProto.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- goal [Goal2Proto]: The target destination
- odometry [Odometry2Proto]: The odometry information with current speed
Outgoing messages
- feedback [Goal2FeedbackProto]: Feedback regarding the goal
Parameters
- action_name [string] [default=”move_base”]: ROS namespace to which the action will be communicated
- goal_frame_ros [string] [default=”map”]: Frame of the goal in outgoing ROS message
- robot_frame_ros [string] [default=”base_link”]: Frame of the robot in ROS. Used to stop the robot if needed.
- robot_frame_isaac [string] [default=”robot”]: Frame of the robot in Isaac. Used in publishing feedback pose.
- stationary_speed_thresholds [Vector2d] [default=Vector2d(0.025, DegToRad(5.0))]: Threshold on speed to determine if the robot is stationary (positional and angular)
isaac.ros_bridge.ImageToRos¶
Description
This codelet receives ImageProto data within Isaac application and publishes it to ROS.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- frame_id [string] [default=”camera”]: This parameter populates frame_id in the ROS image message. Details at http://docs.ros.org/api/sensor_msgs/html/msg/Image.html
isaac.ros_bridge.OdometryToRos¶
Description
This codelet receives odometry data within an Isaac application and publishes it to ROS. This codelet assumes the ROS subscriber respects https://www.ros.org/reps/rep-0103.html, i.e., x forward, y left, z up.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- pose_frame [string] [default=”odom”]: Frame of the pose in outgoing message
- twist_frame [string] [default=”base_footprint”]: Frame of the twist in outgoing message
isaac.ros_bridge.PosesToRos¶
Description
For a list of pose mappings, reads each pose from the Isaac Pose Tree and writes it to ROS tf2. Note that frame definitions between Isaac and ROS may not match, e.g., the “map” frame of Isaac and the “map” frame of ROS are typically different.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- ros_node [string] [default=”ros_node”]: Name of the Isaac node with the RosNode component. Needs to be set before the application starts.
- pose_mappings [std::vector<IsaacRosPoseMapping>] [default={}]: A JSON object from configuration containing the poses to read from the Isaac Pose Tree and write to ROS. Left hand side (lhs_frame) corresponds to target_frame in tf2 notation. Right hand side (rhs_frame) corresponds to source_frame in tf2 notation. Layout:
  [
    {
      "isaac_pose": {
        "lhs_frame": "odom", "rhs_frame": "robot"
      },
      "ros_pose": {
        "lhs_frame": "odom", "rhs_frame": "base_footprint"
      }
    }
  ]
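A complete, hypothetical config entry for this component might therefore look as follows; the node name "ros_bridge" and component name "poses_to_ros" are assumptions, not fixed names:
  "config": {
    "ros_bridge": {
      "poses_to_ros": {
        "ros_node": "ros_node",
        "pose_mappings": [
          {
            "isaac_pose": { "lhs_frame": "odom", "rhs_frame": "robot" },
            "ros_pose": { "lhs_frame": "odom", "rhs_frame": "base_footprint" }
          }
        ]
      }
    }
  }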
isaac.ros_bridge.RosNode¶
Description
This codelet initializes a ROS node and ticks until roscore is up. Every Isaac application with a ROS bridge needs to have one and only one node with a single component of this type. This codelet also provides (1) ros::NodeHandle, which can be used to initialize ROS message subscribers and publishers, and (2) checkBeforeInterface(), which should be used before carrying out ROS operations.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- ros_node_name [string] [default=”isaac_bridge”]: Node name that will appear in ROS node diagram
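Because every ROS bridge application needs exactly one node with a single RosNode component, a minimal graph entry can be sketched as follows; naming the node "ros_node" is only a convention, chosen to match the "ros_node" defaults used by other bridge components:
  "graph": {
    "nodes": [
      {
        "name": "ros_node",
        "components": [
          { "name": "RosNode", "type": "isaac::ros_bridge::RosNode" }
        ]
      }
    ]
  }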
isaac.ros_bridge.RosToCameraIntrinsics¶
Description
ROS’s sensor_msgs/CameraInfo.msg message defines meta information for a camera. Details: http://docs.ros.org/melodic/api/sensor_msgs/html/msg/CameraInfo.html. This ROS bridge converter codelet helps convert ROS’s sensor_msgs/CameraInfo.msg into Isaac’s PinholeProto type. This codelet can be used, for example, to use Isaac’s GPU-accelerated perception algorithms on ROS images.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
- Parameters
- (none)
isaac.ros_bridge.RosToDifferentialBaseCommand¶
Description
This codelet receives a twist message from ROS and publishes it as a velocity command for a differential base robot.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
- Parameters
- (none)
isaac.ros_bridge.RosToImage¶
Description
ROS’s sensor_msgs/Image.msg message contains an uncompressed image. Details: http://docs.ros.org/melodic/api/sensor_msgs/html/msg/Image.html. This ROS bridge converter codelet helps convert ROS’s sensor_msgs/Image.msg into Isaac’s ImageProto type. This codelet can be used, for example, to run Isaac’s GPU-accelerated perception algorithms on ROS images.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
- Parameters
- (none)
isaac.ros_bridge.RosToPoses¶
Description
For a list of pose mappings, reads each pose from ROS tf2 and writes it to the Isaac Pose Tree. Note that frame definitions between Isaac and ROS may not match, e.g., the “map” frame of Isaac and the “map” frame of ROS are typically different.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- ros_node [string] [default=”ros_node”]: Name of the Isaac node with the RosNode component. Needs to be set before the application starts.
- pose_mappings [std::vector<IsaacRosPoseMapping>] [default={}]: A JSON object from configuration containing the poses to read from ROS tf2 and write to the Isaac Pose Tree. Left hand side (lhs_frame) corresponds to target_frame in tf2 notation. Right hand side (rhs_frame) corresponds to source_frame in tf2 notation. Layout:
  [
    {
      "isaac_pose": {
        "lhs_frame": "odom", "rhs_frame": "robot"
      },
      "ros_pose": {
        "lhs_frame": "odom", "rhs_frame": "base_footprint"
      }
    }
  ]
isaac.safety.CarterSimBridge¶
Description
Simulation bridge that translates RMP segway commands to differential base commands in order to interface with the Carter platform in Omniverse Isaac Sim.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- segway_command [CommandProto]: RMP segway command
Outgoing messages
- base_command [StateProto]: Differential base command
- Parameters
- (none)
isaac.safety.CarterTeleop¶
Description
Reads keyboard input and publishes Carter commands
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- base_command [StateProto]: Differential base command
- segway_command [CommandProto]: RMP segway command
- estop_command [EstopProto]: E-stop command
Parameters
- device_directory [string] [default=”/dev/input/by-path”]: Directory containing keyboard devices
- input_gain [float] [default=0.5]: Scales user input before mapping to control command
isaac.safety.EstopController¶
Description
Controller that provides virtual access to actuators with emergency stop support
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- control_input [CommandProto]: Requested control command
- estop_command [EstopProto]: E-stop override signal
Outgoing messages
- control_output [CommandProto]: Actual control command
- fault [FaultProto]: Fault signal TODO - update with actual fault codes (https://jirasw.nvidia.com/browse/ISAFETY-115)
Parameters
- estop_sequence_single [JsonCommandSequence] [default={}]: Configured E-stop sequence to execute once
- estop_sequence_loop [JsonCommandSequence] [default={}]: Configured E-stop sequence to continuously loop
isaac.safety.FaultMonitor¶
Description
Monitor that observes and forwards faults from connected components. Used to monitor faults in a safety partition.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input [FaultProto]: Incoming faults indicating the status of connected components
Outgoing messages
- output [FaultProto]: Outgoing fault indicating number of connected components that are faulty
- fault [FaultProto]: Outgoing fault indicating status of unit
Parameters
- table_size [int] [default=256]: Size of fault table
isaac.safety.TestSink¶
Description
Message sink for testing E-Stop Controller
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- control_output [CommandProto]: Test output connections
- fault [FaultProto]:
- Outgoing messages
- (none)
- Parameters
- (none)
isaac.safety.TestSource¶
Description
Message source for testing E-Stop Controller
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- control_input [CommandProto]: Test input connections
- estop_command [EstopProto]:
Parameters
- control_input [JsonCommandSequence] [default={}]: Configuration for test inputs
- estop_command [EstopList] [default={}]:
isaac.scenario_engine.PolylineTransform¶
Description
Moves an object along a polyline, which is an ordered list of poses or a trajectory. The codelet picks the target poses in the order of specification and interpolates between the current and target poses based on the time passed. Optionally, the polyline can be traversed in a continuous loop or just once. The pose to be transformed to at each step is written to the pose tree.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- rhs_frame [string] [default=]: Name of the reference frame for the right hand side of the pose. This would refer to the name of the object transformed along the polyline.
- lhs_frame [string] [default=]: Name of the reference frame of the left side of the pose
- polyline [std::vector<Pose3d>] [default=]: Polyline on which the object has to move, including the start pose of the object. If the object has to be transformed in a continuous loop, the start pose of the object should appear at both the beginning and the end of this vector. This is important because the time needed to move from the last pose of the polyline back to the start pose needs to be known or computed.
- polyline_timestamps [std::vector<double>] [default=]: Timestamps for each pose in the polyline.
- translation_threshold [double] [default=0.001]: Threshold to determine if the translations of two poses are close to each other
- rotation_threshold [double] [default=0.001]: Threshold to determine if the rotations of two poses are close to each other
- loop_polyline [bool] [default=false]: If set to true, the polyline is traversed in a continuous loop.
isaac.scenario_engine.PoseTreeEdgeToRigidBodyGroup¶
Description
Converts PoseTreeEdge messages to RigidBody3Group proto messages. This can be used, for example, when the scenario generation API output needs to be converted to rigid body messages in order for them to be published in simulation.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- pose_tree_edge [PoseTreeEdgeProto]: Incoming message containing the 3D pose between 2 frames. In the case of scenario generation APIs, the pose would be between the simulated world and the actor.
Outgoing messages
- rigid_body_group [RigidBody3GroupProto]: Outgoing rigid body group message containing the body pose.
- Parameters
- (none)
isaac.scenario_engine.SpawnActors¶
Description
Spawns actors at specified poses, or destroys actors in the scene. Each actor to be spawned is specified by a prefab, an assigned name, and a spawn pose. Actors to be destroyed are identified by their names. Optionally, the spawned actors are destroyed when the codelet stops.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- actors [ActorGroupProto]: Outgoing messages to spawn or destroy actors in simulation
Parameters
- spawn_actor_names [std::vector<std::string>] [default=]: List of actor names to be spawned
- spawn_actor_prefabs [std::vector<std::string>] [default=]: List of actor prefabs to be spawned, in the order of the specified names
- spawn_actor_poses [std::vector<Pose3d>] [default=]: List of poses that the actors are to be spawned with, in the order of the specified names
- destroy_actor_names [std::vector<std::string>] [default=]: List of actor names to be destroyed
- destroy_on_stop [bool] [default=true]: If set to true, the spawned actors are destroyed when the codelet is stopped
isaac.sight.AliceSight¶
Description
Interface for Sight. Provides a default implementation which does nothing.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
- Parameters
- (none)
isaac.sight.SightWidget¶
Description
A component which can be used to create a widget for Sight.
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- base_frame [string] [default=]: If specified the window will use this frame as the base frame (the frame used as reference when rendering).
- static_frame [string] [default=]: If specified the window will use this frame as the static frame (the frame used to synchronize the channel with the base channel).
- title [string] [default=]: The caption of the widget. If not specified the component name will be used
- type [Type] [default=]: The type of the widget (mandatory). Possible choices are: “2d”, “3d”, “plot”.
- dimensions [Vector2i] [default=]: The initial dimensions of the widget. If not specified sight will decide.
- markers [std::vector<std::string>] [default={}]: A list of pose markers to show
- channels [std::vector<Channel>] [default={}]: A list of channels to display on the Sight widget. Channels have several parameters:
  - name: The name of the Sight channel in the form node_name/component_name/channel_name
  - active: If disabled, the channel will not be drawn initially when the widget is created
- prepend_channel_name_with_app_name [bool] [default=true]: If enabled all channel names are prefixed with the app name.
- prepend_title_with_app_name [bool] [default=true]: If enabled the title of the widget will be prefixed with the app name.
- enabled [bool] [default=true]: If false, this SightWidget will not create a window in Sight
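As a sketch, a 2d camera widget might be configured as follows; the node name "viewers", the component name "camera_widget", and the channel path are all hypothetical:
  "config": {
    "viewers": {
      "camera_widget": {
        "type": "2d",
        "title": "Camera View",
        "dimensions": [480, 640],
        "channels": [
          { "name": "my_app/camera/viewer/image", "active": true }
        ]
      }
    }
  }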
isaac.sight.WebsightServer¶
Description
The WebsightServer class serves the frontend web visualization. Data is sent over a websocket using a predefined API.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- port [int] [default=3000]: Port for the communication between web server and Sight
- webroot [string] [default=”packages/sight/webroot”]: Path to the files needed for Sight
- assetroot [string] [default=”../isaac_assets”]: Path to assets used by the webpage, for example pictures or robot models.
- bandwidth [int] [default=10000000]: Bandwidth to limit the rate of data transfer
- latency [int] [default=5]: Latency with which Sight messages are processed. Higher latency can help reduce stress on the connection; however, messages will be delayed before being sent.
- use_compression [bool] [default=true]: Whether to compress data for transfer
- use_cache [bool] [default=true]: Whether to cache files loaded from disk. If set to false, the Websight server will read from disk every time a request is made.
- ui_config [json] [default=(nlohmann::json{{“windows”, {}}})]: Configuration for User Interface (UI)
isaac.skeleton_pose_estimation.OpenPoseDecoder¶
Description
OpenPoseDecoder converts tensors from an OpenPose-type model into a list of skeleton models. Note: Because a modified OpenPose architecture is used, the tensors are not compatible with the original paper.
OpenPose is a popular model architecture that allows 2D pose estimation of keypoints (or “parts”) of articulated and solid objects. Examples of such objects include humans, vehicles, animals, and robotic arms. Only a single type of object is normally supported by the model; however, multiple instances of the object are supported. Note: OpenPose performs simultaneous detection and ‘skeleton model’ pose estimation of objects, so in the following documentation ‘objects’, ‘skeleton models’, and ‘skeletons’ are used interchangeably. For more information about the model, please refer to https://arxiv.org/pdf/1812.08008.pdf
OpenPoseDecoder takes in multiple tensors from the OpenPose neural network. Specifically, these tensors are used: Part Affinity Fields, Parts Gaussian Heatmaps, and Parts Gaussian Heatmaps MaxPool. It uses the Parts Gaussian Heatmaps and Parts Gaussian Heatmaps MaxPool tensors to compute the peak map for detecting the potential keypoints of each object in the frame and outputs these keypoints as the vertices of a graph. The graph edges are created based on prior knowledge of the edges between object parts. It then uses the Part Affinity Fields tensor to weight the graph. The weighted graph contains all possible edges between candidates of two parts. A greedy algorithm specialized to the task is then used to find the optimal edges based on the maximum score that can be obtained from the weights of the graph. Finally, the codelet refines the positions of the final keypoints and publishes the final graphs as a Skeleton2ListProto message.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- part_affinity_fields [TensorProto]: [0] : part_affinity_fields : PAFLayer = “lambda_2/conv2d_transpose”
- gaussian_heatmap [TensorProto]: [1] : gaussian_heatmap : GaussianHeatMapLayer = “lambda_3/tensBlur_depthwise_conv2d”
- maxpool_heatmap [TensorProto]: [2] : maxpool_heatmap : MaxPoolGHMLayer = “tensBlur/MaxPool”
Outgoing messages
- skeletons [Skeleton2ListProto]: A list of 2D pose estimations of skeleton models for detected objects (list of SkeletonProto). See SkeletonProto for more details.
Parameters
- label [string] [default=]: A string to initialize the ‘label’ field of the output SkeletonProto object. It should be set to match the type of object detected by the model (for example ‘human’).
- labels [std::vector<std::string>] [default=]: List of strings to use as detected joints labels. For example: [“Elbow”, “Wrist”, …] It is used to initialize the ‘label’ field of skeleton joints. Note, the order and size of this list of labels should match that of the gaussian_heatmap tensor (channels dimension).
- edges [std::vector<Vector2i>] [default=]: List of edges to detect (as edges of the skeleton model). Each edge is defined by a pair of indices into the labels array specified by the ‘labels’ parameter. Indices are zero-based. For example [[0, 1], [2, 3]] will define two edges with the first edge “Elbow” - “Wrist”. This list is configured at the training time of the model.
- edges_paf [std::vector<Vector2i>] [default=]: List of indices to channels of the part_affinity_fields tensor, to locate components of the parts affinity field. This list is ‘indexed by edge_id’ (so the order and size of this list should match that of the edges parameter). This list is configured at model training time.
- threshold_heatmap [float] [default=]: Peak-map preprocessing threshold. Part-candidates below this threshold are discarded.
- threshold_edge_size [float] [default=]: PAF-candidate edge size. Connection-candidates below this threshold are discarded.
- threshold_edge_score [float] [default=]: PAF-candidate dot-product threshold. Connection-candidates below this threshold are discarded.
- threshold_edge_sampling_counter [int] [default=]: PAF-candidate counter threshold. Connection-candidates below this threshold are discarded. This is the number of times the dot-product was larger than threshold_edge_score during edge_sampling_steps. Note, it depends on edge_sampling_steps (it should be smaller than or equal to edge_sampling_steps).
- threshold_part_counter [int] [default=]: Final skeleton detection part counter threshold. Detections with fewer parts are discarded.
- threshold_object_score [float] [default=]: Final skeleton detection score threshold. Detections with a lower score are discarded.
- threshold_split_score [float] [default=]: Final skeleton detection split threshold; objects with a lower score are not merged.
- edge_sampling_steps [int] [default=]: Number of sampling steps to calculate line integral over the part affinity field. Note also: threshold_edge_sampling_counter.
- refine_parts_coordinates [bool] [default=]: Refine peaks of the gaussian heatmap with the “weighted coordinates” approach. The gaussian heatmap grid cells adjacent to the initial peak are used to refine the peak position and obtain better estimates of part coordinates. Note, the refined part coordinates are floating point subpixel coordinates placed at “grid centers”, rather than integer rows and columns.
- output_scale [Vector2d] [default=]: Output scale for the decoded skeleton pose output. For example, this could be the image resolution (before downscaling to fit the network input tensor resolution). The format is [output_scale_rows, output_scale_cols]
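To make the relationship between the labels, edges, and edges_paf parameters concrete, here is a small hypothetical configuration fragment for a three-part model; the component name "openpose_decoder", the part names, channel indices, and thresholds are purely illustrative and must match how the model was actually trained:
  "openpose_decoder": {
    "label": "human",
    "labels": ["Neck", "Elbow", "Wrist"],
    "edges": [[0, 1], [1, 2]],
    "edges_paf": [[0, 1], [2, 3]],
    "threshold_heatmap": 0.05,
    "output_scale": [720, 1280]
  }
In this sketch, edge 0 connects “Neck” to “Elbow”, and its part affinity field components are located in channels 0 and 1 of the part_affinity_fields tensor.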
isaac.skeleton_pose_estimation.SkeletonsPnP¶
Description
This codelet reads in a list of skeleton detections with respect to the sensor frame and refines the pose of each skeleton detection using the Perspective-n-Point (PnP) solver on:
- 2D projections of skeleton detection corners
- 3D prior poses of skeleton detection corners from CAD or measurements
Note: The pose tree should contain rigid transforms for skeletons_of_interest_T_skeleton_joint.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- intrinsics [CameraIntrinsicsProto]: Intrinsics of color camera containing the skeleton detections
- skeletons [Skeleton2ListProto]: Input list of skeleton detections with respect to the sensor frame.
Outgoing messages
- reprojected_skeletons [Skeleton2ListProto]: Output list of PnP-constrained reprojected skeletons with respect to the sensor frame.
- detections [Detections3Proto]: Output list of objects with respect to the sensor frame, their class names and confidence.
Parameters
- skeletons_of_interest [string] [default=]: List of comma separated skeletons of interest (e.g., “dolly”). The pose tree should contain transforms for skeletons_of_interest_T_skeleton_joint.
- reprojection_error_threshold [double] [default=10.0]: Reprojection error threshold, in pixels. Note: Skeletons with a reprojection error above the set threshold will be filtered out. Confidence is set to: (reprojection_error_threshold - reprojection_error) / reprojection_error_threshold
- minimum_joints_threshold [int] [default=6]: Minimum number of joints threshold. NOTE: pose estimates for objects associated with fewer joints will be filtered out.
isaac.superpixels.RgbdSuperpixelCostMap¶
Description
Creates a cost map from superpixels. The cost map is a bird’s-eye view around the robot where cells are marked with a cost depending on whether they are free or blocked.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- superpixels [SuperpixelsProto]: Superpixels used to segment the image
- labels [SuperpixelLabelsProto]: Superpixels labels used to label pixels
Outgoing messages
- occupancy_map_lattice [LatticeProto]: Cost map computed from obstacle superpixel
- occupancy_map [ImageProto]: Cost map computed from obstacle superpixel
Parameters
- costmap_frame [string] [default=”costmap”]: The name of the costmap frame
- superpixels_frame [string] [default=”superpixels”]: The name of the superpixels frame
- clear_radius [int] [default=10]: A small rectangular area around the robot with this radius is always marked as free to prevent the robot from seeing itself.
- cell_size [double] [default=0.035]: The size of a cell in the costmap
- dimensions [Vector2i] [default=Vector2i(64, 64)]: The dimensions of the costmap
- relative_offset [Vector2d] [default=Vector2d(0.125, 0.5)]: The zero position of the costmap frame inside the costmap array
isaac.superpixels.RgbdSuperpixels¶
Description
Computes fast superpixel clustering for an RGB-D image. This algorithm uses a GPU-friendly, single-pass clustering scheme which assigns every pixel to a local cluster based on similarity in color and depth. Cluster seeds are fixed on the image in a hexagonal pattern.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- color [ImageProto]: Color image used for superpixel computation
- depth [ImageProto]: Depth image used for superpixel computation
- points [ImageProto]: Pixel points
- edges [ImageProto]: Pixel edges
- normals [ImageProto]: Pixel normals
Outgoing messages
- superpixels [SuperpixelsProto]: The computed superpixels
- surflets [CompositeProto]: Computed superpixel surflets as a composite type. Surflets are published as batches. Each surflet has the following information: pixel coordinate, color, point, normal
Parameters
- seed_radius [int] [default=1]: Pixel radius over which initial superpixel features are averaged
- delta [int] [default=32]: The size of the region of influence of a superpixel
- px_expected_point_distance [double] [default=0.04]: Various parameters for superpixel computation
- px_expected_normal_distance [double] [default=0.05]:
- px_expected_color_distance [double] [default=0.25]:
- px_weight_point [double] [default=0.0]:
- px_weight_normal [double] [default=0.0]:
- px_weight_color [double] [default=3.0]:
- sp_expected_point_distance [double] [default=0.17]:
- sp_expected_normal_distance [double] [default=0.15]:
- sp_expected_color_distance [double] [default=0.27]:
- sp_weight_point [double] [default=1.0]:
- sp_weight_normal [double] [default=1.0]:
- sp_weight_color [double] [default=3.0]:
- regularization [double] [default=0.25]:
- smoothing [double] [default=1.0]:
- use_gpu [bool] [default=true]: If enabled GPU accelerated CUDA kernels are used; otherwise computations are done on CPU.
- show_boundaries [bool] [default=true]: If enabled superpixel color visualization will show boundaries. This is slightly slower.
isaac.superpixels.SuperpixelImageLabeling¶
Description
This component creates a pixel-wise segmentation of the original camera image based on a superpixel labeling.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- superpixels [SuperpixelsProto]: Superpixels used to segment the image
- labels [SuperpixelLabelsProto]: Superpixels labels used to label pixels
Outgoing messages
- class_segmentation [ImageProto]: Computed segmentation which labels every pixel of the original camera image
Parameters
- label_invalid [int] [default=2]: The output label for pixels marked as invalid, for example pixels with invalid depth or pixels which are not assigned to a superpixel.
isaac.surflets.EstimateGroundPlane¶
Description
Fits a plane to the 3D points in surflets. We use an iterative plane fitting algorithm with an initial guess based on the normals in the cloud. Returns the plane model and a segmentation mask for inliers (points that participated in the plane fitting).
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- surflets [CompositeProto]: Surflets in which we want to find the plane.
Outgoing messages
- plane_model [PlaneProto]: The ground plane after the iterative fitting algorithm finished.
- plane_mask [TensorProto]: The segmentation mask for the fitted plane.
Parameters
- distance_threshold [double] [default=0.1]: The maximum distance that a point can be away from the plane and not considered an outlier.
- max_iterations [int] [default=10]: The maximum number of iterations to run the iterative plane fitting algorithm for.
- initial_guess [Vector4d] [default=]: Initial guess for plane model (normal, offset) used for plane fitting. If not provided, will generate the initial guess from surflet points and normals.
isaac.surflets.MinimumDistanceAssignment¶
Description
This class derives from the Assignment base class and implements minimum distance based assignment from model to measurement. Different distance functions can be provided using the surflets_distance variable. Assignment_j = argmin_i(distance(s1_i, s1_D_s2 * s2_j))
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- scale [double] [default=0.5]: Scaling parameter for assignment
isaac.surflets.PointCloudClustering¶
Description
This codelet finds clusters of 3D points in the surflets and publishes a segmentation mask for all clusters. It optionally receives a mask for in-plane points from the EstimatePlane codelet and excludes the points of the plane from clustering.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- surflets [CompositeProto]: Surflets containing 3d points in which to compute clusters.
- plane_mask [TensorProto]: Segmentation mask for dominant plane in cloud, used if parameter use_plane_mask is set to true
Outgoing messages
- assignment [TensorProto]: Segmentation mask for cluster assignment, of dimension (num clusters x num points) of 0 or 1.
Parameters
- cluster_tolerance [float] [default=0.01]: Threshold in meters for the closest distance from a point to any point in the cluster for it to be included in the cluster.
- min_cluster_size [int] [default=40]: Minimum number of points. Clusters with fewer points are ignored.
- max_cluster_size [int] [default=500]: Maximum number of points. Clusters with more points are ignored.
- use_plane_mask [bool] [default=true]: If set, wait for plane_mask and only use outliers in computing clusters. Otherwise ignore the plane mask channel and use all points in computing clusters.
isaac.surflets.PointDistance¶
Description
This class inherits from the base component Distance. It implements point to point distance between model and measurement surflets.
distance_i = |s1_i - (s1_R_s2 * s2 + s1_P_s2)|^2
distance = sum_j(|s1_a_j - (s1_R_s2 * s2_j + s1_P_s2)|^2)
gradient = -2.0 * jacobian(s1_D_s2 * s2_j)^T * (s1_a_j - (s1_R_s2 * s2_j + s1_P_s2))
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
- Parameters
- (none)
isaac.surflets.PositionNormalDistance¶
Description
This class inherits from the Distance base class and implements plane normal distance.
distance_i = |p1_i - s1_D_s2 * p2|^2 + |n1_i - s1_R_s2 * n2|^2
distance = sum_j(|p1_a_j - s1_D_s2 * p2_j|^2 + |n1_a_j - s1_R_s2 * n2_j|^2)
gradient = sum_j(-2.0 * jacobian(s1_D_s2 * p2_j)^T * (p1_a_j - s1_D_s2 * p2_j) - 2.0 * jacobian(s1_R_s2 * n2_j)^T * (n1_a_j - s1_R_s2 * n2_j))
Type: Component - This component does not tick and only provides certain helper functions.
- Incoming messages
- (none)
- Outgoing messages
- (none)
- Parameters
- (none)
isaac.surflets.SurfletMasking¶
Description
This codelet takes in measurement surflets, object detections, and decoder segmentation as input. It rectifies the NxN segmentation image to the size of the detection. It checks whether the measurement surflet position is within the detection bounding box and the segmentation value is greater than the threshold, to ensure that the measurement belongs to that object, and assigns “1”. If the measurement does not belong to the object, the assignment is set to 0.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- surflets [CompositeProto]: Input image superpixels (pixel coordinate)
- detections [Detections2Proto]: Input object detections from resnet object detection module
- segmentation [TensorProto]: Input segmentation tensor from pose estimation. It has the object segmentation mask as a tensor(detection x 128 x 128)
Outgoing messages
- assignment [TensorProto]: Output object association for every image surflet. It contains 1s where the object is and 0s everywhere else. Tensor (detection x surflet count)
Parameters
- threshold [double] [default=0.5]: Threshold for filtering the surflets
isaac.trajectory_validation.ApplicabilityCheck¶
Description
Implements check functions that return true if the given trajectory is within parameterized thresholds of the current state.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- threshold_time [double] [default=]: For this check to return true, there needs to be a state in the trajectory that is within threshold_time of the current state time.
- threshold_pos_x [double] [default=]: Threshold on how far the position in x may deviate
- threshold_pos_y [double] [default=]: Threshold on how far the position in y may deviate
- threshold_heading [double] [default=]: Threshold on how far the heading may deviate
- threshold_linear_speed [double] [default=]: Threshold on how far the linear speed may deviate
- threshold_angular_speed [double] [default=]: Threshold on how far the angular speed may deviate
- threshold_linear_acceleration [double] [default=]: Threshold on how far the linear acceleration may deviate
- threshold_angular_acceleration [double] [default=]: Threshold on how far the angular acceleration may deviate
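A hypothetical configuration for this check might look as follows; the node name "trajectory_checks" and all threshold values are illustrative, not recommendations:
  "trajectory_checks": {
    "ApplicabilityCheck": {
      "threshold_time": 0.5,
      "threshold_pos_x": 0.2,
      "threshold_pos_y": 0.2,
      "threshold_heading": 0.1,
      "threshold_linear_speed": 0.5,
      "threshold_angular_speed": 0.5,
      "threshold_linear_acceleration": 1.0,
      "threshold_angular_acceleration": 1.0
    }
  }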
isaac.trajectory_validation.CollisionCheck¶
Description
Implements check functions that return true if the given trajectory is able to keep at least the desired minimum distance from obstacles.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- disable_check [bool] [default=false]: If true, this check is disabled and always returns true
- static_frame [string] [default=”world”]: Frame to use when checking distance to obstacles
- plan_frame [string] [default=”odom”]: Frame of the trajectory
- obstacle_names [std::vector<std::string>] [default=]: List of obstacles for collision check. The ObstacleAtlas is queried.
- deceleration_limit [double] [default=1.0]: Each DOF of the robot is assumed to be able to slow down with an acceleration equal to minus deceleration_limit. For a differential base robot with two active wheels, each wheel is assumed to decelerate with this value.
- distance_threshold [double] [default=0.1]: The minimum distance a trajectory should keep from obstacles in order to be valid
- time_step_trajectory [double] [default=0.1]: Delta time to use when checking the states of trajectory. Smaller value means sampling the input trajectory more frequently.
- time_step_brake [double] [default=0.1]: Delta time to use when checking the states within braking distance. Smaller value means sampling the braking trajectory more frequently.
- brake_time_limit [double] [default=60]: Upper limit on the time it takes to fully stop. This parameter protects against the case where the slowing-down capability, parameterized with deceleration_limit, is extremely small. In such a case, without a limit on time, checking the braking trajectory with the default time_step_brake parameter would be computationally heavy due to an unreasonably high number of samples.
- path_visualization_length [int] [default=1000]: Maximum number of locations to show when visualizing the path. This will correspond to the capacity of the collision_path argument.
isaac.trajectory_validation.FeasibilityCheck¶
Description
Implements check functions that return true if the given trajectory is realistic, i.e., integration errors are within the parameterized limits.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- limit_pos_x [double] [default=]: Limit on integration error per second in x position
- limit_pos_y [double] [default=]: Limit on integration error per second in y position
- limit_heading [double] [default=]: Limit on integration error per second in heading
- limit_linear_speed [double] [default=]: Limit on integration error per second in linear speed
- limit_angular_speed [double] [default=]: Limit on integration error per second in angular speed
isaac.trajectory_validation.RangeCheck¶
Description
Implements a check function that returns true if the given values are within the limits described by the min and max parameters.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- type [Type] [default=]: Determines type of the values to check, e.g., linear or angular speed
- min [double] [default=]: Minimum for each value
- max [double] [default=]: Maximum for each value
isaac.trajectory_validation.SpeedCheck¶
Description
Implements check functions that return true if the given trajectory respects the speed constraints described by the valid_speed_region parameter.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
- Outgoing messages
- (none)
Parameters
- valid_speed_region [geometry::Polyline2d] [default=]: List of linear and angular speed pairs that forms the polygon within which speed values are valid. The first element of each point is linear speed in m/s and the second element is angular speed in rad/s. Example layout:
  "valid_speed_region": [
    [0, 0.5], [1.0, 0.0], [0.0, -0.5], [-0.25, 0.0]
  ]
In this example, the state is valid if the robot is stationary and invalid if the linear speed exceeds 1.0, no matter what the angular speed is.
isaac.trajectory_validation.StateValidation¶
Description
Validates the current state using the checks plugged in as components. For example, if a trajectory_validation::SpeedCheck component is added to the same node as this codelet, the state is checked using the criteria described in the SpeedCheck component. This codelet passes through the command if the state is valid; otherwise it stops the robot by sending zero commands. For the collision check, an odometry message is needed to get position data.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- state [StateProto]: Contains current state, in the form of messages::DifferentialBaseDynamics
- odometry [Odometry2Proto]: Contains the odometry information such as position, velocity, and acceleration
- desired_command [StateProto]: Command values desired by the controller, in the form of messages::DifferentialBaseControl
Outgoing messages
- safe_command [StateProto]: Output command values, in the form of messages::DifferentialBaseControl. This command equals the input desired_command if the state is valid. Otherwise, zero commands are published, as a safety device on hardware would do.
Parameters
- robot_model [string] [default=”navigation.shared_robot_model”]: Name of the robot model node
- disable_checks [bool] [default=false]: If true, desired command is forwarded without running the checks
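The plug-in pattern described above can be sketched as a single application graph node that carries the validator together with its checks; the component names and the particular set of checks shown here are assumptions:
  {
    "name": "state_validation",
    "components": [
      { "name": "StateValidation", "type": "isaac::trajectory_validation::StateValidation" },
      { "name": "SpeedCheck", "type": "isaac::trajectory_validation::SpeedCheck" },
      { "name": "RangeCheck", "type": "isaac::trajectory_validation::RangeCheck" }
    ]
  }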
isaac.trajectory_validation.TrajectorySelection¶
Description
Forwards the primary trajectory if valid. Otherwise forwards the alternative trajectory if valid. If neither of them is valid, an empty trajectory is published to signal an emergency stop.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- is_valid_primary [BooleanProto]: Boolean message regarding whether the primary trajectory is valid
- trajectory_primary [DifferentialTrajectoryPlanProto]: List of timestamped states forming the primary trajectory
- is_valid_alternative [BooleanProto]: Boolean message regarding whether the alternative trajectory is valid
- trajectory_alternative [DifferentialTrajectoryPlanProto]: List of timestamped states forming the alternative trajectory
Outgoing messages
- trajectory_output [DifferentialTrajectoryPlanProto]: Trajectory for the controller
Parameters
- try_alternative [bool] [default=true]: If an alternative planner is not available, set this flag to false; then, if the primary trajectory is not valid, we fall back directly to the emergency plan. If this flag is true, no trajectory is forwarded until all input channels have received a message.
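As an illustration of the wiring, the following hypothetical edges connect a primary TrajectoryValidation instance and its trajectory source to this codelet; all node and component names are assumptions:
  "edges": [
    { "source": "validation_primary/TrajectoryValidation/is_valid",
      "target": "selection/TrajectorySelection/is_valid_primary" },
    { "source": "planner_primary/Planner/trajectory",
      "target": "selection/TrajectorySelection/trajectory_primary" }
  ]
The alternative trajectory and its validity message would be connected to trajectory_alternative and is_valid_alternative in the same way.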
isaac.trajectory_validation.TrajectoryValidation¶
Description
Validates a trajectory candidate using the checks plugged in as components. For example, if a trajectory_validation::SpeedCheck component is added to the same node as this codelet, the trajectory is checked using the criteria described in the SpeedCheck component. While a trajectory message is needed for any check, the applicability check also needs an odometry message to determine whether the trajectory is applicable to the current state of the robot. The validity is published as a message.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- trajectory [DifferentialTrajectoryPlanProto]: Trajectory is a list of timestamped states. TODO: Replace with CompositeProto
- odometry [Odometry2Proto]: Contains the odometry information such as position, velocity, and acceleration
Outgoing messages
- is_valid [BooleanProto]: Boolean message regarding whether the trajectory is valid
Parameters
- robot_model [string] [default=”navigation.shared_robot_model”]: Name of the robot model node
- warning_period [double] [default=5.0]: Sets how frequently a message will be printed to console if a check fails. The unit is seconds.
- disable_checks [bool] [default=false]: If true, valid message is published without running the checks
isaac.utils.CompositeToDifferentialTrajectoryConverter¶
Description
Converts from composite type to a differential state trajectory in a chosen coordinate frame
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- input_plan [CompositeProto]: Composite message that contains time series of poses and velocities to form the trajectory.
Outgoing messages
- output_plan [DifferentialTrajectoryPlanProto]: Output message that contains time series of poses and velocities to form the trajectory.
Parameters
- frame [string] [default=]: The desired frame in which to publish the trajectory
isaac.utils.DetectionUnprojection¶
Description
Takes detections with bounding boxes in pixel coordinates and projects them into robot coordinates to output poses relative to the robot frame.
For a point of interest in the camera image, we can get a 3D translation relative to the camera frame using (1) camera intrinsics, (2) depth information, and (3) location on the image. The question is which location to use. For each detection, we have a bounding box. A naive approach would be to pick only the center location. For robustness, we generalize this idea below.
1. For each detection, we focus around the center of the bounding box, because not every pixel of the bounding box is going to belong to the object of interest.
2. We get the region of interest (ROI) by shrinking the bounding box using roi_scale.
3. Around each of the 4 corners of the ROI, we create a small bounding box called unprojection_area.
4. We take the average of the points (represented in the camera frame) for every pixel of the 4 unprojection_areas to get our final estimate.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- depth [ImageProto]: Input depth image to use to find real-world coordinates of bounding boxes
- intrinsics [CameraIntrinsicsProto]: Pinhole model and other distortion parameters for depth camera
- detections [Detections2Proto]: Bounding box in pixel coordinates and class label of objects in an image
Outgoing messages
- detections_with_poses [Detections3Proto]: Output list of detections with their 3D poses populated by this codelet
Parameters
- roi_scale [double] [default=0.25]: Scale factor for getting the region of interest (ROI) from detection bounding box. Please see codelet summary above for details.
- spread [Vector2i] [default=Vector2i(10, 10)]: In pixels, half dimensions of the unprojection_areas in row and column. Please see codelet summary above for details.
- invalid_depth_threshold [double] [default=0.05]: Depth values smaller than this value are considered to be invalid.
isaac.utils.Detections3Filter¶
Description
Receives detections as a message, filters them using parameters, and publishes them as a message. It also shows the region of interest for the center of detection in 2D and 3D on Sight. As an example usage, imagine a cart delivery robot. The robot first drives to the pickup area to detect the cart to carry. In case multiple detections are made, the robot would not know which cart to carry. This codelet can help the selection by dropping the carts that are outside the region of interest.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detections_in [Detections3Proto]: Input list of detections with their 3D poses
Outgoing messages
- detections_out [Detections3Proto]: Output list of detections with their 3D poses
Parameters
- labels [std::vector<std::string>] [default={}]: If non-empty, only the detections whose label is in this list will pass.
- min_confidence [double] [default=0.0]: Detections need at least this confidence to pass.
- detection_frame [string] [default=]: Reference frame for the poses in detections message.
- roi_frame [string] [default=]: Reference frame for the region of interest (roi). If not set, detection_frame is used.
- threshold_translation [Vector3d] [default=]: Only the detections whose pose are within this threshold of roi_frame will be published. Values are (x, y, z).
- threshold_rotation [Vector3d] [default=]: Only the detections whose pose are within this threshold of roi_frame will be published. Values are (roll, pitch, yaw).
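Continuing the cart example above, a hypothetical configuration that keeps only confident “cart” detections near the region of interest could look like this; the node name, component name, and all values are illustrative:
  "detection_filter": {
    "Detections3Filter": {
      "labels": ["cart"],
      "min_confidence": 0.5,
      "detection_frame": "camera",
      "threshold_translation": [1.0, 1.0, 0.5],
      "threshold_rotation": [3.14, 3.14, 0.5]
    }
  }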
isaac.utils.DetectionsToPoseTree¶
Description
Writes received detections to the pose tree. Input is a list of detections with 3D poses. Output is the updated pose tree. The coordinate frame in which the detections are made is a parameter of this codelet.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detections [Detections3Proto]: List of object detections made, potentially by an object detection model using camera
- Outgoing messages
- (none)
Parameters
- detection_frame [string] [default=]: Frame where detections are made
- reference_frame [string] [default=]: Optional frame to use when writing poses to the PoseTree. If not set, detection_frame is used.
- label [string] [default=]: If set, we only write detection with this label to the pose tree.
- report_success [bool] [default=false]: Whether to report success upon writing to PoseTree
isaac.utils.DifferentialTrajectoryToPlanConverter¶
Description
Converts a differential state trajectory to a plan in a chosen coordinate frame. TODO: This should be changed to output a differential trajectory plan proto.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- original_trajectory [DifferentialTrajectoryPlanProto]: The original trajectory in some coordinate frame
Outgoing messages
- plan [Plan2Proto]: The computed plan in the desired coordinate frame
Parameters
- frame [string] [default=]: The desired frame in which to publish the plan
isaac.utils.FlatscanToPointCloud¶
Description
Converts a flatscan to a 3D point cloud. This is useful to run point cloud based algorithms on flatscans, for example for scan-to-scan matching. In many cases, however, much more efficient algorithms could be written for the two-dimensional case of a flatscan.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: Input flatscan
Outgoing messages
- cloud [PointCloudProto]: Output 3D point cloud
- Parameters
- (none)
isaac.utils.JoystickConfirmation¶
Description
Using joystick data received as a message, reports success or failure. A potential use of this codelet is to test behavior trees that are missing intermediate steps: for example, say the application has a sequence behavior A -> B -> C and B is not implemented yet. To test the rest of the app, this codelet can be used to manually report success or failure.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- joystick_state [JoystickStateProto]: Joystick state including information about which buttons are pressed
- Outgoing messages
- (none)
Parameters
- success_button_index [int] [default=6]: When the button with this ID is pressed on the joystick, this codelet reports success. For a PlayStation Dualshock 4 Wireless Controller, this button may correspond to ‘L2’.
- failure_button_index [int] [default=7]: When the button with this ID is pressed on the joystick, this codelet reports failure. For a PlayStation Dualshock 4 Wireless Controller, this button may correspond to ‘R2’.
isaac.utils.Plan2Converter¶
Description
Converts a plan from its current coordinate frame into a chosen coordinate frame
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- original_plan [Plan2Proto]: The original plan in some coordinate frame
Outgoing messages
- plan [Plan2Proto]: The computed plan in the desired coordinate frame
Parameters
- frame [string] [default=]: The desired frame in which to publish the plan
isaac.utils.Pose2GaussianDistributionEstimation¶
Description
Estimates mean and covariance distribution parameters from a list of weighted samples for the case of SE(2), i.e. translation and rotation in two-dimensional space.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- samples [Pose2Samples]: A list of samples of Pose2 type
Outgoing messages
- mean_and_covariance [Pose2MeanAndCovariance]: Mean and covariance of the received pose samples
Parameters
- lhs_frame [string] [default=]: If set the mean will also be written to the pose tree. The name of the target frame is composed from the parameters lhs_frame and rhs_frame as: lhs_frame_T_rhs_frame.
- rhs_frame [string] [default=]: See comment for lhs_frame.
isaac.utils.PoseEvaluation¶
Description
Compares two poses over time and computes various comparison metrics. This can, for example, be used to measure localization performance by evaluating the tracked robot pose against ground truth. The codelet computes results every tick and shows them in Sight. It also periodically publishes data in bulk as JSON for further processing.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- samples [JsonProto]: A report consisting of all samples since the last time samples were published. The JSON is an array where each entry has the following format:
  {
    "reference_T_expected": [px, py, pa],
    "reference_T_actual": [px, py, pa],
    "pose_delta": [dx, dy, da],
    "speed_delta": [dx, dy, da]
  }
Here pose_delta is computed as Log(expected_T_ref * reference_T_actual) and speed_delta as Log(actual_prev_T_ref * reference_T_actual) - Log(expected_prev_T_ref * reference_T_expected), where *_prev indicates the position of a frame at the previous timestamp. pose_delta measures how well the poses of the two frames match, while speed_delta measures how well their speeds match. A frame can match quite well in pose but be very unstable over time.
Parameters
- delay [double] [default=0.250]: Errors are computed in the past. This solves potential problems with tick order.
- reference_frame [string] [default=]: Reference frame in which the two transformations are compared
- expected_frame [string] [default=]: Coordinate frame name of the expected transformation
- actual_frame [string] [default=]: Coordinate frame name of the actual transformation
- histogram_sample_count [int] [default=100]: Number of samples in the histograms
- histogram_linear_resolution [double] [default=0.005]: Position resolution of pose delta and speed delta histograms
- histogram_angular_resolution [double] [default=0.0025]: Angular resolution of pose delta and speed delta histograms
- batch_size [int] [default=100]: Number of samples to publish per message. If set to 0 no samples will be published
isaac.utils.PoseMonitor¶
Description
Generates a JSON report per tick for a list of poses as Pose2d, relative to a reference frame. In each tick, a pose is included in the log if it is available on the pose tree. Input is a list of names for the poses and the name of the reference frame. Output is the JSON log containing a mapping from pose name to 2D pose.
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- report [Json]: The log of poses as a JSON message.
Parameters
- reference_frame [string] [default=”world”]: Name of the reference frame.
- pose_names [std::vector<std::string>] [default={}]: List of names for the poses to report.
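A minimal configuration sketch, with hypothetical node, component, and pose names:
  "pose_monitor": {
    "PoseMonitor": {
      "reference_frame": "world",
      "pose_names": ["robot", "cart_1", "cart_2"]
    }
  }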
isaac.utils.PoseTreeFeed¶
Description
Broadcasts all pose tree updates as proto messages
Type: Codelet - This component ticks either periodically or when it receives messages.
- Incoming messages
- (none)
Outgoing messages
- pose [PoseTreeEdgeProto]: proto edge, including the lhs, rhs, and the pose
- Parameters
- (none)
isaac.utils.RigidBodiesToDetections¶
Description
Receives rigid bodies in 3D and publishes them as detections. This codelet can be used, for example, to receive ground truth information for objects in NavSim and convert it into imitation object detections for use in various applications, e.g., Kaya approaching an apple.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- bodies [RigidBody3GroupProto]: Input information regarding rigid bodies in 3D
Outgoing messages
- detections [Detections3Proto]: Output list of objects with poses in 3D
Parameters
- confidence [double] [default=0.99]: Output detections will have this prediction confidence
isaac.utils.RigidBodyToOdometry¶
Description
Computes ground truth odometry from rigid body messages, publishes the result as an Odometry2Proto, and writes the odom_T_robot pose to the pose tree. The codelet can be used, for example, when we do not have base state information but can obtain perfect ground truth from the rigid body sink in simulation.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- bodies [RigidBody3GroupProto]: Receives rigidbody3 with 3d pose, velocity and acceleration.
Outgoing messages
- odometry [Odometry2Proto]: Publishes odometry for a selected rigid body.
Parameters
- rigid_body_name [string] [default=]: Name of the rigid body whose odometry we want to compute.
- odometry_frame [string] [default=”odom”]: Name of the odometry frame.
- robot_frame [string] [default=”robot”]: Name of the robot frame.
- reference_frame [string] [default=]: Name of the reference frame. If provided, reference_T_robot will be set to pose tree as well. This can be used as ground-truth localization.
- is_holonomic [bool] [default=false]: If true, outputs lateral speed/acceleration in odometry message. Otherwise lateral speed and acceleration are set to zero per differential base model.
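A configuration sketch, assuming the standard Isaac application JSON layout; the node name and rigid body name are illustrative:
"config": {
  "ground_truth_odometry": {
    "RigidBodyToOdometry": {
      "rigid_body_name": "robot",
      "odometry_frame": "odom",
      "robot_frame": "robot",
      "reference_frame": "world",
      "is_holonomic": false
    }
  }
}
Because reference_frame is set here, world_T_robot is also written to the pose tree and can serve as ground-truth localization.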
isaac.utils.SegmentationCameraProtoSplitter¶
Description
Splits a SegmentationCameraProto into ImageProtos for class and instance labels and into a PinholeProto. This is a temporary component for deprecating SegmentationCameraProto.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- segmentation_camera [SegmentationCameraProto]: The incoming message which will be split
Outgoing messages
- class_labels [ImageProto]: The class label segmentation image
- instance_labels [ImageProto]: The instance label segmentation image
- pinhole [PinholeProto]: The pinhole stored in the incoming message
- label_map [LabelProto]: The labels from the incoming message
Parameters
- (none)
isaac.utils.SendTextMessages¶
Description
Publishes text messages one by one from the given list in a loop.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- text_output [ChatMessageProto]: Sends out a single text string periodically.
Parameters
- text_list [std::vector<std::string>] [default=]: List of text messages to send
- initial_delay [double] [default=0.0]: Delay (in seconds) before publishing the first text message
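A configuration sketch; the node name, messages, and tick rate are illustrative, and the tick period is assumed to drive the publish rate:
"config": {
  "text_sender": {
    "SendTextMessages": {
      "tick_period": "1Hz",
      "initial_delay": 2.0,
      "text_list": ["starting mission", "mission complete"]
    }
  }
}
With these values, the first message is published two seconds after start, followed by one message per second, looping over the list.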
isaac.utils.WaitUntilDetection¶
Description
Reports success once an object is detected. Detections are received as messages. This codelet keeps ticking until a message with at least one object detection with desired label is received.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detections [Detections3Proto]: List of object detections, potentially made by an object detection model using a camera
Outgoing messages
- (none)
Parameters
- label [string] [default=]: If set, we wait until a detection with this label is made. Otherwise, we report success after any detection.
isaac.utils.deprecated.ColorCameraProtoSplitter¶
Description
Splits a ColorCameraProto into an ImageProto and a PinholeProto. This is a temporary component for deprecating ColorCameraProto.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- color_camera [ColorCameraProto]: The incoming message which will be split
Outgoing messages
- image [ImageProto]: The image stored in the incoming message
- pinhole [PinholeProto]: The pinhole stored in the incoming message
- intrinsics [CameraIntrinsicsProto]: The full image intrinsics including pinhole and distortion parameters
Parameters
- only_pinhole [bool] [default=true]: If true, only the pinhole camera parameters are published. This can be used for undistorted camera frames that have only pinhole parameters.
isaac.utils.deprecated.DepthCameraProtoSplitter¶
Description
Splits a DepthCameraProto into an ImageProto and a PinholeProto. This is a temporary component for deprecating DepthCameraProto.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- depth_camera [DepthCameraProto]: The incoming message which will be split
Outgoing messages
- image [ImageProto]: The image stored in the incoming message
- pinhole [PinholeProto]: The pinhole stored in the incoming message
- intrinsics [CameraIntrinsicsProto]: The full image intrinsics including pinhole and distortion parameters if needed
Parameters
- only_pinhole [bool] [default=true]: If true, only the pinhole camera parameters are published. This can be used for undistorted camera frames that have only pinhole parameters.
isaac.velodyne_lidar.VelodyneLidar¶
Description
A driver for the Velodyne VLP16 Lidar.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- scan [RangeScanProto]: A range scan slice published by the Lidar
Parameters
- ip [string] [default=”192.168.2.201”]: The IP address of the Lidar device
- port [int] [default=2368]: The port at which the Lidar device publishes data.
- type [VelodyneModelType] [default=VelodyneModelType::VLP16]: The type of the Lidar (currently only VLP16 is supported).
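A configuration sketch; the values shown are the documented defaults, and the string form of the type enum ("VLP16") is an assumption about how VelodyneModelType is serialized:
"config": {
  "lidar": {
    "VelodyneLidar": {
      "ip": "192.168.2.201",
      "port": 2368,
      "type": "VLP16"
    }
  }
}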
isaac.viewers.BinaryMapViewer¶
Description
Visualizes a binary map with sight. On the one hand it provides a basic visualization which just displays the map as an image in sight. On the other hand it creates an advanced visualization which visualizes the binary map on a camera image or in 3D.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- binary_map [ImageProto]: The binary map to visualize with sight (Image1ub, 0 means free and 255 occupied)
- binary_map_lattice [LatticeProto]: Lattice information of the binary map
Outgoing messages
- (none)
Parameters
- min_interval [double] [default=0.05]: The minimum time which has to elapse before we publish data to sight again.
- smooth_boundary [bool] [default=true]: If enabled boundary and interior visualization will use smooth boundaries
isaac.viewers.DepthCameraViewer¶
Description
DepthCameraViewer visualizes a depth camera image in sight. This is useful to limit the bandwidth used for visualization.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- depth [ImageProto]: Image containing depth information to visualize
- intrinsics [CameraIntrinsicsProto]: Pinhole model and other distortion parameters for depth camera
Outgoing messages
- (none)
Parameters
- target_fps [double] [default=30.0]: Maximum framerate at which images are displayed in sight
- reduce_scale [int] [default=1]: Reduction factor for image, values greater than one will shrink the image by that factor
- min_visualization_depth [double] [default=0.0]: Minimum depth in meters used in color grading the depth image for visualization
- max_visualization_depth [double] [default=32.0]: Maximum depth in meters used in color grading the depth image for visualization
- colormap [std::vector<Vector3i>] [default=]: A color gradient used for depth visualization. The min_visualization_depth gets mapped to the first color, the max gets mapped to last color. Everything else in between gets interpolated.
- camera_name [string] [default=”“]: Name of the camera used to get the camera pose from the pose tree (optional)
- enable_depth_point_cloud [bool] [default=false]: Enable depth point cloud visualization, can slow down sight if too many points are being drawn
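A configuration sketch with a custom color gradient; the node name and color stops are illustrative, and serializing each Vector3i as a three-element array is an assumption:
"config": {
  "depth_viewer": {
    "DepthCameraViewer": {
      "min_visualization_depth": 0.2,
      "max_visualization_depth": 10.0,
      "colormap": [
        [0, 0, 128],
        [0, 255, 255],
        [255, 255, 0],
        [128, 0, 0]
      ]
    }
  }
}
A depth of 0.2 m maps to the first color and 10.0 m to the last; everything in between is interpolated along the gradient.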
isaac.viewers.Detections3Viewer¶
Description
Visualizes detections with poses in 3D. Unlike DetectionsViewer, where detections are displayed on images, Detections3Viewer shows the locations of detections. This pose information can potentially be supplied by the DetectionUnprojection codelet. It provides the choice of rendering a mesh, a bounding box, or a sphere in 3D.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detections [Detections3Proto]: List of detections with their 3D poses in robot frame
Outgoing messages
- (none)
Parameters
- radius [double] [default=]: Radius used when visualizing the detections TODO: Can we get this value from DetectionUnprojection?
- mesh_name [string] [default=]: Name of the mesh in sight
- object_T_box_center [Pose3d] [default=Pose3d::Identity()]: Position of the center of the bounding box. Only used if bounding boxes are not contained in the detections3 message.
- box_dimensions [Vector3d] [default=]: Dimensions of the bounding box. Only used if bounding boxes are not contained in the detections3 message.
- detections_color [Vector4ub] [default=Vector4ub(118, 185, 0, 255)]: Color of the detections
- frame [string] [default=]: Reference frame of the detection. TODO(ben): this should come from the Detections3Proto.
isaac.viewers.DetectionsViewer¶
Description
This codelet shows detections received via message. It can be used to visualize the 2D bounding boxes of detections on an image.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- detections [Detections2Proto]: Bounding box in pixel coordinates and class label of objects in an image
Outgoing messages
- (none)
Parameters
- reduce_scale [int] [default=1]: Reduction factor for bounding boxes, values greater than one will shrink the box by that amount. Should match the factor of the image being drawn upon.
- border_background_color [Pixel3ub] [default=Colors::Black()]: Background border color for the bounding boxes
- border_foreground_color [Pixel3ub] [default=Colors::NvidiaGreen()]: Foreground border color for the bounding boxes
- border_background_width [double] [default=4.0]: Background border width for the bounding boxes
- border_foreground_width [double] [default=2.0]: Foreground border width for the bounding boxes
- font_size [double] [default=30.0]: Font size for the class label displayed
- textbox_height [double] [default=35.0]: Height of the textbox in which the class label is displayed
- minimum_textbox_width [double] [default=80.0]: Minimum width for the textbox. If the bounding box width is greater than this, the textbox width will be set to the bounding box width instead.
- enable_labels [bool] [default=true]: If true, displays the text label under each bounding box
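A configuration sketch showing reduce_scale kept in sync with the image viewer the boxes are drawn over; the node names are illustrative:
"config": {
  "detections_viewer": {
    "DetectionsViewer": {
      "reduce_scale": 2,
      "enable_labels": true
    }
  },
  "image_viewer": {
    "ImageViewer": {
      "reduce_scale": 2
    }
  }
}
Both components use the same reduction factor so the bounding boxes stay aligned with the shrunken image.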
isaac.viewers.FiducialsViewer¶
Description
Receives fiducials and shows them in Sight. Users can provide configurations for specific tags. If no configuration is supplied for a tag, it is shown in the default color with the tag id as text below it.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- fiducials [FiducialListProto]: The input channel to receive fiducial detections
Outgoing messages
- (none)
Parameters
- text_size [double] [default=30.0]: The size of the text used in sight, in pixels (px)
- id_specific_configs [Json] [default=Json::object()]: An optional json object for configuring tag visualization for certain tags. Currently supported parameters are “color_fill”, “text_below”, and “text_above”. Color needs to be in a valid JavaScript format, e.g., “#C0C0C0”, “#C0C0C05C”, “rgb(255, 99, 71)”, or “white”. Here is an example layout:
{ "tag36h11_6": { "text_below": "Metal", "color_fill": "#C0C0C05C" }, "tag36h11_7": { "text_below": "Compost", "color_fill": "#FFA95F5C" }, "tag36h11_8": { "text_below": "Paper", "color_fill": "#F2EECB5C" }, "tag36h11_9": { "text_below": "Paper -->", "text_above": "<-- Metal" } }
isaac.viewers.FlatscanViewer¶
Description
Visualizes a flatscan at the estimated position.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- flatscan [FlatscanProto]: Incoming range scan used to localize the robot
Outgoing messages
- (none)
Parameters
- beam_skip [int] [default=4]: The number of beams which are skipped for visualization
- map [string] [default=”map”]: Map node to use for localization
- range_scan_model [string] [default=”shared_robot_model”]: Name of the robot model node
- flatscan_frame [string] [default=”lidar”]: Frame in which the flatscan is defined
isaac.viewers.GoalViewer¶
Description
In Sight, shows the goal received as a message using the shape of the robot model.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- goal [Goal2Proto]: The target destination received
Outgoing messages
- (none)
Parameters
- robot_model [string] [default=”shared_robot_model”]: Name of the robot model node
isaac.viewers.ImageKeypointViewer¶
Description
The ImageKeypointViewer visualizes keypoint tensors by assigning a random color to each feature id.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- coordinates [TensorProto]: Coordinates for image keypoints
- features [TensorProto]: Optional. Image feature ids as a rank-1 int tensor with dimensions (number of features). If received with the same timestamp as coordinates, feature colors will be selected randomly and be persistent between ticks; features with the same id will have the same color. If enable_features is not set, all features will be rendered with the same color. See the “color” parameter.
Outgoing messages
- (none)
Parameters
- enable_features [bool] [default=true]: Enable features RX channel. If it’s enabled, rendering will wait until coordinates and features with the same timestamp are received.
- radius [float] [default=3]: Radius in pixel of a circle drawn around every image feature which is visualized in sight.
- reduce_scale [int] [default=1]: Reduction factor for the keypoints; values greater than one will scale the keypoint coordinates down by that factor. Should match the factor of the image being drawn upon.
- color [Vector3ub] [default=]: Optional. If it is set then all keypoints will be the same color even if features ids are present.
isaac.viewers.ImageViewer¶
Description
Visualizes images received by message in sight. This is useful to limit the bandwidth used for visualization.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- image [ImageProto]: Input image to visualize
- intrinsics [CameraIntrinsicsProto]: Intrinsics including pinhole and distortion parameters
Outgoing messages
- (none)
Parameters
- target_fps [double] [default=30.0]: Maximum framerate at which images are displayed in sight.
- reduce_scale [int] [default=1]: Reduction factor for image, values greater than one will shrink the image by that factor.
- use_png [bool] [default=false]: Renders tensor image as PNG if true, otherwise renders as JPG
- camera_name [string] [default=”“]: Frame of the camera (to get the position from the PoseTree)
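For illustration, a sketch of the edges that would feed this viewer from a camera node; the camera component and its channel names ("color", "color_intrinsics") are assumptions that depend on the actual camera driver:
"edges": [
  { "source": "camera/RealsenseCamera/color", "target": "viewer/ImageViewer/image" },
  { "source": "camera/RealsenseCamera/color_intrinsics", "target": "viewer/ImageViewer/intrinsics" }
]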
isaac.viewers.MosaicViewer¶
Description
Receives tensors on multiple channels, combines them into a mosaic and visualizes them with sight. Each channel is mapped to a fixed panel in the mosaic view. The number of panels is determined by the rx tags in the graph.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- tile_dimensions [Vector2i] [default=Vector2i(360, 640)]: Dimensions of one tile in the mosaic. Images will be resized to fit the tile.
- tiles_per_column [int] [default=2]: Number of tiles per row in the mosaic.
- margin [int] [default=10]: Number of pixels of the margin for each panel
- colormap [std::vector<Vector3ub>] [default=]: List of colors for the margin and text of each panel
isaac.viewers.ObjectViewer¶
Description
Visualizes an object in 3D after reading its pose from the PoseTree. Provides the choice of rendering a mesh or a bounding box in 3D. If such renderings are not required, interactive markers can be used instead to visualize poses from the PoseTree.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- frame [string] [default=]: Frame of the object to visualize
- mesh_name [string] [default=]: If set, this mesh will be rendered
- bounding_box [geometry::BoxD] [default=]: If set, this bounding box will be rendered
isaac.viewers.OccupancyMapViewer¶
Description
Visualizes an occupancy map with sight. It shows both mean and standard deviation in one single image. Areas with high confidence are shown in black or white. Areas with low confidence go towards grey.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- occupancy_map [ImageProto]: The occupancy map to visualize with sight
- occupancy_map_lattice [LatticeProto]: The occupancy lattice information about the grid
Outgoing messages
- (none)
Parameters
- min_interval [double] [default=0.05]: The minimum time which has to elapse before we publish data to sight again.
isaac.viewers.ParkingSpotListViewer¶
Description
Visualizes parking spots with sight using simple 2D polylines. The parking spot box and entry line are drawn separately with custom colors.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- parking_spot_list [ParkingSpotListProto]: Input parking spot list to visualize
Outgoing messages
- (none)
Parameters
- as_pixel_coordinate [bool] [default=false]: If true, the polyline is visualized in pixel coordinate (2d image frame). Otherwise it’s visualized in 3d frame given by the frame parameter
- frame [string] [default=”robot”]: The name of the coordinate frame for the parking spots. Only used if as_pixel_coordinate is false
- visualization_z [double] [default=0.05]: z value when visualizing the polyline2 in 3d as polyline3
- parking_spot_box_color [Vector3ub] [default=(Vector3ub{255, 0, 0})]: The color of the parking spot box visualization
- parking_spot_entry_line_color [Vector3ub] [default=(Vector3ub{0, 255, 0})]: The color of the parking spot entry line visualization
isaac.viewers.Plan2Viewer¶
Description
Receives a global plan as Plan2Proto and visualizes it in sight
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- plan [Plan2Proto]: Receives the plan to visualize
Outgoing messages
- (none)
Parameters
- color [Vector4ub] [default=(Vector4ub{0, 200, 0, 255})]: Color of the path to display in sight
isaac.viewers.PointCloudViewer¶
Description
Visualizes a point cloud in sight. This component is useful to limit the overall bandwidth when displaying a point cloud.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- cloud [PointCloudProto]: The point cloud which will be visualized in sight.
Outgoing messages
- (none)
Parameters
- target_fps [double] [default=10.0]: Maximum framerate at which images are displayed in sight.
- skip [int] [default=11]: If set to a value greater than 1 points will be skipped. For example skip = 2 will skip half of the points. Use this value to limit the number of points visualized in sight.
- max_distance [double] [default=5.0]: Points which have a depth (z-component) greater than this value will be skipped
- frame [string] [default=]: The coordinate frame in which the point cloud is visualized.
isaac.viewers.Polyline2Viewer¶
Description
Visualizes 2D polyline with sight.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- polyline [Polyline2dProto]: Input polyline to visualize
Outgoing messages
- (none)
Parameters
- as_pixel_coordinate [bool] [default=false]: If true, the polyline is visualized in pixel coordinate (2d image frame). Otherwise it’s visualized in 3d frame given by the frame parameter
- frame [string] [default=”robot”]: The name of the coordinate frame of the polyline. Only used if as_pixel_coordinate is false
- visualization_z [double] [default=0.05]: z value when visualizing the polyline2 in 3d as polyline3
- polyline_color [Vector3ub] [default=(Vector3ub{255, 255, 0})]: The color of the polyline visualization
isaac.viewers.PoseTrailViewer¶
Description
Visualizes the path over time of a given pose as a 2D line in Sight. The pose trail can be drawn as an overlay over 3D and 2D navigation map views. The pose for visualization is obtained from the Pose Tree.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- (none)
Parameters
- lhs_frame [string] [default=”world”]: Defines the name of the left hand side reference frame of the pose.
- rhs_frame [string] [default=”robot”]: Defines the name of the right hand side reference frame of the pose.
- trail_count [int] [default=30]: Defines the number of the previous locations included in the pose trail.
- trail_time_step [double] [default=0.5]: Defines the time difference between the points shown as part of the trail.
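A configuration sketch; the node name and tick rate are illustrative. With the values below, the trail keeps 60 points spaced 0.25 s apart, i.e. the last 15 seconds of motion:
"config": {
  "pose_trail": {
    "PoseTrailViewer": {
      "tick_period": "10Hz",
      "lhs_frame": "world",
      "rhs_frame": "robot",
      "trail_count": 60,
      "trail_time_step": 0.25
    }
  }
}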
isaac.viewers.SegmentationCameraViewer¶
Description
Receives segmentation camera information from the simulator and visualizes it.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- class_segmentation [ImageProto]: Image containing class label information. The codelet can receive an 8-bit class label image, a 16-bit instance label image, or both, along with the pinhole camera model for the segmentation image.
- instance_segmentation [ImageProto]: Image containing instance label information
- intrinsics [CameraIntrinsicsProto]: Intrinsics including pinhole and distortion parameters
Outgoing messages
- (none)
Parameters
- target_fps [double] [default=30.0]: Target FPS used to show images to sight, decrease to reduce overall bandwidth needed
- reduce_scale [int] [default=1]: Reduction factor for image, values greater than one will shrink the image by that amount
- camera_name [string] [default=]: Frame of the camera (to get the position from the PoseTree)
isaac.viewers.SegmentationViewer¶
Description
Visualizes a pixel-wise segmentation on top of a camera image. This component supports synchronization and transparency overlay.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- color [ImageProto]: The original camera image
- class_segmentation [ImageProto]: Pixel-wise image segmentation which is overlaid on top of the camera image
- intrinsics [CameraIntrinsicsProto]: Intrinsics including pinhole and distortion parameters
Outgoing messages
- (none)
Parameters
- max_fps [double] [default=20.0]: Maximum FPS for showing images in sight, which can be used to reduce overall bandwidth
- reduce_scale [int] [default=2]: Reduction factor for image, values greater than one will shrink the image by that amount
- highlight_label [int] [default=0]: The label which will be overlaid on top of the color image.
- highlight_color [Pixel3ub] [default=Pixel3ub(255, 255, 255)]: The color which is used to overlay the label.
- opacity [double] [default=0.5]: Opacity (0.0: full transparent, 1.0: full overlay) of the overlaid labels
- camera_name [string] [default=]: Frame of the camera (to get the position from the PoseTree)
isaac.viewers.SkeletonViewer¶
Description
This codelet takes in a Skeleton2ListProto message containing a list of skeletons with 2D joint locations. It then visualizes the 2D joint locations and connectivity. For better visualization, SkeletonViewer may draw edges that are different from the prior edges list used by OpenPoseDecoder.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- skeletons [Skeleton2ListProto]: A list of skeleton models.
Outgoing messages
- (none)
Parameters
- labels [std::vector<std::string>] [default=]: List of joints labels to render (as joints of the skeleton model). For example: [“Elbow”, “Wrist”, …]
- edges_render [std::vector<Vector2i>] [default=]: List of edges to render (as edges of the skeleton model). Each edge is defined by a pair of indices into the labels array specified by the ‘labels’ parameter. Indices are zero-based. For example [[0, 1], [2, 3]] will define two edges with the first edge “Elbow” - “Wrist”.
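A configuration sketch; the joint labels are illustrative. The index pairs in edges_render refer to positions in the labels array, so [0, 1] draws the “Shoulder”-“Elbow” edge and [1, 2] the “Elbow”-“Wrist” edge:
"config": {
  "skeleton_viewer": {
    "SkeletonViewer": {
      "labels": ["Shoulder", "Elbow", "Wrist"],
      "edges_render": [[0, 1], [1, 2]]
    }
  }
}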
isaac.viewers.TensorViewer¶
Description
“Flattens” and colorizes a tensor into an image and visualizes it with sight. Depending on the element type and rank of the tensor, different visualization techniques are used.
- Element type:
- 32-bit floating points are colorized with StarryNightColorGradient using the range specified by the parameter range
- 32-bit integers are colorized using a standard set of random colors
- Rank:
- A rank 1 tensor is reformatted into a rank-2 tensor with tile_columns number of columns. If tile_columns is not specified only a single row will be used.
- A rank 2 tensor is visualized directly using its dimensions.
- A rank 3 tensor is visualized as stitched slices. Slices are extracted based on the storage order. The tile_columns parameters defines how many tiles are used horizontally for the stitched mosaic.
- Tensors with rank 4 or higher are not supported.
Note that dimensions of 1 are ignored, e.g. a 1x1x8 tensor is considered to have rank 1.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- tensor [TensorProto]: A tensor to visualize in sight. Multiple different formats are supported as explained in the class comment.
Outgoing messages
- colorized [ImageProto]: The computed image which is shown via sight is also published as a message on this channel.
Parameters
- tile_columns [int] [default=]: Number of columns in the resulting mosaic image
- rank_3_as_color [bool] [default=false]: If enabled, a rank three tensor will be interpreted as a 3-channel RGB image. Otherwise one tile will be generated per channel slice.
- storage_order [TensorViewerStorage] [default=TensorViewerStorage::kPlanar]: Defines how a rank 3 tensor is sliced for visualization.
- range [Vector2d] [default=Vector2d(0.0, 1.0)]: For floating-point tensors values will be clamped to this range.
- render_size [Vector2i] [default=]: Optionally enlarge or shrink the resulting image before visualization with sight.
- use_png [bool] [default=false]: Renders tensor image as PNG if true, otherwise renders as JPG
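A configuration sketch; the node name is illustrative and the string form of the storage order ("planar") is an assumption about how TensorViewerStorage is serialized. With these values a 64x64x8 float tensor would be split into eight 64x64 slices and stitched into a 4x2 mosaic, with values clamped to [0, 1] before colorization:
"config": {
  "tensor_viewer": {
    "TensorViewer": {
      "tile_columns": 4,
      "storage_order": "planar",
      "range": [0.0, 1.0],
      "use_png": false
    }
  }
}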
isaac.viewers.TrajectoryListViewer¶
Description
Visualization channels for trajectories. The component ticks on receiving a collection of trajectories to display. You may use the trajectories channels in compatible 2D or 3D renderers.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- trajectories [Vector3TrajectoryListProto]: The input channel to receive all trajectories to be displayed.
Outgoing messages
- (none)
Parameters
- renderer_frame [string] [default=”world”]: Renderer frame to transform the trajectories per their respective frames.
isaac.viewers.deprecated.ColorCameraViewer¶
Description
This codelet is deprecated. Please use ImageViewer instead. Visualizes a color camera image in sight. This is useful to limit the bandwidth used for visualization.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- color_listener [ColorCameraProto]: 8-bit RGB color camera to visualize
Outgoing messages
- (none)
Parameters
- target_fps [double] [default=30.0]: Maximum framerate at which images are displayed in sight.
- reduce_scale [int] [default=1]: Reduction factor for image, values greater than one will shrink the image by that factor.
- use_png [bool] [default=false]: Renders tensor image as PNG if true, otherwise renders as JPG
- camera_name [string] [default=”“]: Frame of the camera (to get the position from the PoseTree)
isaac.wakanda.WakandaServer¶
Description
The WakandaServer class creates a video streaming server for sending out images to a connected wakanda client. The actual transmission of images and the type of image to be sent are platform dependent and left to be handled by the inner ImageStreamer class. The wakanda server also supports sending auxiliary data; this codelet currently accepts and sends out 2D bounding boxes.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- color_listener [ImageProto]: Image to stream to a connected client
- detection_listener [Detections2Proto]: 2D bounding box detections
Outgoing messages
- (none)
Parameters
- port [int32_t] [default=]: The wakanda server port number used to wait for a client connection
- width [int32_t] [default=]: Width of the streaming video
- height [int32_t] [default=]: Height of the streaming video
- framerate [int32_t] [default=]: Framerate of the streaming video
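A configuration sketch; all values are illustrative. The four parameters have no documented defaults, so they presumably must be set explicitly:
"config": {
  "wakanda_server": {
    "WakandaServer": {
      "port": 8080,
      "width": 1280,
      "height": 720,
      "framerate": 30
    }
  }
}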
isaac.ydlidar.YdLidar¶
Description
The YDLidar X4 is a low-cost LIDAR that is popular with hobbyists.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- flatscan [FlatscanProto]: A flat scan from the LIDAR. An average message covers about 0.4 radians and contains 40 measurements; the average publish rate is 120 messages per second.
Parameters
- device [string] [default=”/dev/ttyUSB0”]: Serial port where device is connected
isaac.zed.ZedImuReader¶
Description
Publishes IMU readings obtained from a ZED Mini camera. Requires a ZedCamera component to exist in the same node.
Type: Codelet - This component ticks either periodically or when it receives messages.
Incoming messages
- (none)
Outgoing messages
- imu_raw [ImuProto]: IMU data (if available). Reading is performed on every tick, so the IMU poll rate is equal to the codelet tick frequency.
Parameters
- vendor_imu_calibration [bool] [default=true]: Enables the vendor correction of IMU readings from bias, scale and misalignment.
- imu_frame [string] [default=”zed_imu”]: The IMU frame used to define the left_camera_T_imu transform in the PoseTree