To render from inside a codelet, declare a parameter holding a sight transmitter:
gxf::Parameter<gxf::Handle<SightTransmitter>> sight_tx_;
Then, from a tick function, call the following:
sight_tx_->show("channel_name", time, [&](::isaac::sight::Sop& sop) {
sop.style = sight::SopStyle{...}
sop.transform = sight::SopTransform{...}
// ...
return GXF_SUCCESS;
});
The first parameter is the name of the channel (the full channel name in Sight is appname/node_name/codelet_name/channel). The second parameter is the time of the SOP. The final parameter is a function that constructs a valid sight::Sop containing what needs to be rendered.
sight::SopStyle
Each rendered element can have a specific style composed of the color, the size, and a filled
flag.
Use a sight::SopStyle constructor to create a new style:
sight::SopStyle{color};
sight::SopStyle{color, filled};
sight::SopStyle{color, filled, size};
The color must be a Pixel3ub, a Pixel4ub, or a string (or char*) representing a valid JavaScript color:
* A color name: "red", "white", "blue", etc.
* A hexadecimal code: "#ff0000", "#fff", etc.
* A functional notation: rgb(255,0,0), rgba(255,255,255,1.0), etc. Note that the alpha channel is in the range [0.0, 1.0], while colors are in the range [0, 255].
The filled parameter is a boolean (false by default) determining whether the object is filled or wireframe. The size parameter is a scalar, with a default value of 1.0.
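As a sketch, a style combining all three arguments might be set inside the show lambda from the first example (the argument order follows the constructors listed above):

```
sop.style = sight::SopStyle{"rgba(255, 0, 0, 0.5)", true, 2.0};  // semi-transparent red, filled, size 2
```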
sight::SopTransform
Transformations can be passed in different ways: via a frame name, a frame UUID, or a direct pose. The preferred way is by frame UUID (or name):
sight::SopTransform{"robot"};
sight::SopTransform{robot_->frame_uid()};
If you pass the name of a frame, the SOP data is posed inside the coordinate frame with that name. For example, if you specify "robot", everything you show via Sight is placed in the coordinate frame named "robot". The concrete pose is retrieved automatically from the application pose tree. Most applications already use many coordinate frames to store the poses of various actors; you can use these frames directly to pose your visualization data. This method provides the most flexibility, and you should use it whenever you can.
Alternatively, you can give the pose directly as a Pose2 or Pose3 object. In this case the pose is relative to the general world coordinate frame.
sight::SopTransform{pose};
Note that this coordinate frame does not have a name and cannot be reached through the pose tree. The pose parameter must be either a Pose2 or a Pose3.
There are two more optional parameters you can pass in addition: size and pinhole.
sight::SopTransform{pose, size};
sight::SopTransform{pose, pinhole};
sight::SopTransform{pose, size, pinhole};
The size parameter is a scalar used to scale the transform; for an image it corresponds to the pixel size. The pinhole parameter enables Sight to use an image as the background of an augmented camera image, for example to render lines and rectangles on top of a recorded image.
sight::SopImage
The sight::SopImage class encodes an image to display in Sight. PNG and JPEG formats are supported. JPEG is used by default, but a parameter can be provided to choose between JPEG and PNG.
sight::SopImage(image);
sight::SopImage(image, use_png);
The image must be of type Image3ub or Image1ub. Image type Image4ub is supported only with the PNG format.
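As a sketch of typical usage (assuming `camera_image` is an Image3ub available in the tick function), an image can be published inside the show lambda:

```
sight_tx_->show("camera", time, [&](::isaac::sight::Sop& sop) {
  sop.add(sight::SopImage(camera_image, true));  // true: encode as PNG instead of JPEG
  return GXF_SUCCESS;
});
```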
sight::SopMesh
The sight::SopMesh class can be used to display a mesh in a scene. The mesh needs to be registered via the sight_webserver configuration (refer to the Configuration section for the Websight Server for more details).
sight::SopMesh(mesh_name);
sight::SopText
The sight::SopText class can be used to draw text on a canvas.
sight::SopText("text at position (0, 0)");
sight::SopText("text at position (row, col)", Vector2i(row, col));
sight::SopText("text centered at position (row, col)", Vector2i(row, col), true);
sight::SopPointCloud
The sight::SopPointCloud class encodes a list of 2D or 3D points with an optional color:
sight::SopPointCloud(points); // Draw a list of points
sight::SopPointCloud(points, 10); // Draw every 10th point
sight::SopPointCloud(points, colors); // Draw a list of points with a color per point
sight::SopPointCloud(points, colors, 10); // Draw every 10th point with a color per point
The points must be of type SampleCloudConstView2f or SampleCloudConstView3f, and the colors must be of type SampleCloudConstView3ub.
sight::SopMesh
The sight::SopMesh class encodes a mesh provided block-by-block:
sight::SopMesh mesh(bool append);
mesh.add(string block_name, int num_points, float* points_ptr, int num_triangles,
uint16_t* triangles_ptr, uint8_t* colors = nullptr, float* normals = nullptr);
mesh.add(string block_name, vector<float> points, vector<uint16_t> triangles,
vector<uint8_t> colors, vector<float> normals);
The append parameter indicates whether the data should be added to the previous mesh on the same channel or interpreted as a new mesh. You can add blocks one by one (each block has a name and can therefore be updated later), and provide the input either by pointer or by vector.
The colors and normals parameters are optional. In the vector version, these parameters may be empty, but if they are not, their size must match the size of the points.
The points are serialized as [x0, y0, z0, x1, y1, z1, ...], and the triangles are a list of index triplets (referring to the list of points). The colors are RGB triplets, and the normals are serialized like the points.
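To make the buffer layout concrete, here is a small self-contained sketch (plain C++, no Isaac headers) that builds the flattened buffers for a unit square in the format described above:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Flattened mesh buffers in the layout described above:
// points as [x0,y0,z0, x1,y1,z1, ...], triangles as index triplets,
// colors as one RGB triplet per point.
struct FlatMesh {
  std::vector<float> points;
  std::vector<uint16_t> triangles;
  std::vector<uint8_t> colors;
};

// A unit square in the z = 0 plane, split into two triangles.
FlatMesh MakeUnitSquare() {
  FlatMesh m;
  m.points = {0.f, 0.f, 0.f,   // point 0
              1.f, 0.f, 0.f,   // point 1
              1.f, 1.f, 0.f,   // point 2
              0.f, 1.f, 0.f};  // point 3
  m.triangles = {0, 1, 2,      // first half of the square
                 0, 2, 3};     // second half
  m.colors.assign(4 * 3, 255); // white: one RGB triplet per point
  return m;
}
```

Buffers shaped like these can then be passed to the vector overload of mesh.add().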
sight::SopGeometry
The sight::SopGeometry class encodes one simple geometry. The following is a list of supported primitives (for all of them, N = 2/3 and K = double/float/int):
* geometry::LineSegment<K, N>(Vector<K, N> a, Vector<K, N> b); (Line from a to b)
* geometry::NSphere<K, N>{Vector<K, N> center, K radius}; (circle/sphere)
* geometry::NCuboid<K, N>{Vector<K, N> corner1, Vector<K, N> corner2}; (a rectangle or box)
* geometry::Polygon<K, N>{std::vector<Vector<K, N>>{polygon}}; (a polygon)
* geometry::Polyline<K, N>{std::vector<Vector<K, N>>{polyline}}; (a polyline)
* Vector<K, N>{point}; (a point)
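For example, these primitives can be added directly to a Sop inside a show lambda (a sketch, using the 2D double-precision variants):

```
sop.style = sight::SopStyle{"green"};
sop.add(geometry::LineSegment<double, 2>({0.0, 0.0}, {1.0, 1.0}));  // diagonal line segment
sop.add(geometry::NSphere<double, 2>{{0.5, 0.5}, 0.25});            // circle of radius 0.25
```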
sight::Sop (Show Operation)
A show operation is composed of a list of operations to be executed. It can be seen as a tree of operations: each node contains a SopTransform applied to all of its children and a SopStyle providing the default style of the children (used if no style is specified). Each node also contains a list of SopX objects, hence the tree structure.
Sop has an add() function that can take a SopX directly (for example a SopImage) or create the SopX automatically from a list of arguments.
To change the transform or style, override the transform or style object:
sight_tx_->show("channel_name", time, [&](::isaac::sight::Sop& sop) {
  sop.transform = sight::SopTransform{world_T_robot};  // Set the transform where the robot is
  sop.style = sight::SopStyle{"red"};                  // Set the color to red
  sop.add(geometry::CircleD{{0.0, 0.0}, 1.0});         // Draw a red circle at the position of the robot
  sop.add([&](sight::Sop& sop) {                       // Recursive call
    sop.style = sight::SopStyle{"#0000ff"};
    for (const auto& pt : path) {
      sop.add(geometry::CircleD{pt, 0.2});             // Draw a small circle on the path of the robot
    }
  });
  return GXF_SUCCESS;
});
Plot
Render plots with one of the following show functions:
sight_tx_->show("channel", allocator, time, value);
sight_tx_->show("channel", allocator, time, dt, values);
The value parameter represents a single value; values represents a time series starting at time and separated by dt.
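As a sketch (assuming the codelet also declares an allocator handle `allocator_` alongside `sight_tx_`), a scalar or a series can be plotted each tick:

```
// Single sample at the current time
sight_tx_->show("speed", allocator_, time, current_speed);

// A series of samples spaced dt apart, starting at `time`
sight_tx_->show("speed_history", allocator_, time, dt, speed_values);
```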
Configuration
The server can be configured with a YAML configuration file similar to the following:
---
###################
# Sight Webserver #
###################
name: sight_webserver
components:
- type: nvidia::gxf::PeriodicSchedulingTerm
  parameters:
    recess_period: 50Hz
- name: sight_router
  type: nvidia::isaac::SightRouter
- name: sight_webserver
  type: nvidia::isaac::SightWebserver
  parameters:
    sight_router: sight_router
    port: 3000
    processing_message_time_limit_ms: 10
    webroot: packages/sight/webroot
    asset_root: ../isaac_assets
    asset_prefix: /apps/assets/
    app_name: "isaac"
    password: "your_password"
    app_config: """{
      "windows": {
        "Renderer 2D": {
          "renderer": "2d",
          "dims": { "width": 256, "height": 256 },
          "channels": [
            { "name": "appname/node/codelet/channel1", "active": true },
            { "name": "appname/node/codelet/channel2", "active": true },
            { "name": "appname/node/codelet/channel3", "active": true }
          ]
        },
        "Renderer 3D": {
          "renderer": "3d",
          "dims": { "width": 256, "height": 256 },
          "channels": [
            { "name": "appname/node/codelet/channel1", "active": true }
          ]
        }
      },
      "assets": {
        "Asset name": {
          "obj": "apps/assets/carter.obj",
          "txt": "apps/assets/carter_albido.png",
          "norm": "apps/assets/carter_normal.png"
        }
      },
      "js": [
        "additional/js/file/to/load.js"
      ],
      "css": [
        "additional/css/file/to/load.css"
      ]
    }"""
You must add a webserver component: either nvidia::isaac::SightWebserver for a simple webserver or nvidia::isaac::SightWebserverSSL for access to the frontend via HTTPS. A number of parameters can be provided:
* webroot: The path to the folder containing the frontend code
* asset_root: The path to the assets folder
* port: The port the webserver is listening to
* bandwidth: The maximum bandwidth each channel can consume. If this value is too high and the network is saturated, messages accumulate on the server side until some of them are finally dropped; this also creates visual lag on the frontend between what is displayed and what the robot is actually doing. We recommend setting this value to throttle the network before it gets saturated.
* app_config.windows: A list of renderer widgets to be automatically displayed
  * renderer: "2d" or "3d"
  * dims: The size of the renderer
  * channels: The renderer channel list
    * name: The name of the channel
    * active: Whether or not the channel is active by default (the default value is true)
* app_config.assets: The list of assets:
  * obj: The obj file containing the mesh
  * txt: The texture file
  * norm: The file containing the normal information of the 3D mesh
* app_config.js: A list of additional .js files to load (for custom widgets)
* app_config.css: A list of additional .css files to load (for custom widgets)
* password: A password to protect access to the frontend. Note that, once provided, the password is included in the URL, so it is not safe to use if unauthorized users have access to the computer that displays the frontend.
For nvidia::isaac::SightWebserverSSL:
* ssl_key_file_name: The path to the file containing the SSL key
* ssl_cert_file_name: The path to the file containing the SSL certificate
* ssl_passphrase: The passphrase protecting the key (if any)
If you are using the Python API (//sdk/extensions/sight_web/graphs/sight_web.py), you can provide a YAML configuration file to change the default values of the parameters above:
port: 3000
processing_message_time_limit_ms: 10
webroot: packages/sight/webroot
asset_root: ../isaac_assets
asset_prefix: /apps/assets
password: your_password
bandwidth: 10000000
enable_config_bridge: true
use_ssl: false
Execution Optimization
The server is optimized to compute only what is going to be sent to the frontend. When a sight::Sop object is provided via a lambda function, the function is executed if and only if at least one client is currently listening to the channel. Therefore, prefer the lambda form whenever you perform an expensive display operation, such as normalizing, cropping, or resizing an image.
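To benefit from this optimization, keep the expensive work inside the lambda so it is skipped when nobody is listening (a sketch; `Downscale` is a hypothetical helper, not part of the SDK):

```
sight_tx_->show("thumbnail", time, [&](::isaac::sight::Sop& sop) {
  // This only runs when at least one client listens to "thumbnail"
  const Image3ub small = Downscale(full_image);  // hypothetical expensive operation
  sop.add(sight::SopImage(small));
  return GXF_SUCCESS;
});
```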
To communicate with a widget you have created, use SightReceiver to receive messages and SightTransmitter to send information to the frontend:
---
#####################
# Config Visualizer #
#####################
name: sight_config_bridge
components:
- name: transmitter
  type: nvidia::isaac::SightTransmitter
  parameters:
    capacity: 100
    policy: 0
- name: config_bridge_receiver
  type: nvidia::isaac::SightReceiver
  parameters:
    message_types: ["config"]
    js_files: ["js/config_v2.js"]
    css_files: ["css/config_v2.css"]
- name: bridge
  type: nvidia::isaac::SightConfigBridge
  parameters:
    sight_receiver: config_bridge_receiver
    sight_transmitter: transmitter
- type: nvidia::gxf::MessageAvailableSchedulingTerm
  parameters:
    receiver: config_bridge_receiver
    min_size: 1
For example, the following setup for the config bridge contains a SightTransmitter to send a JSON message to the frontend:
auto expected_parts = CreateJsonMessage("config_reply", context());
JsonMessageParts& parts = expected_parts.value();
*(parts.json.get()) = std::move(json);
sight_transmitter_->publish(parts.entity);
A SightReceiver is configured to receive messages with the header “config”; they can be read in the same way as messages from a receiver:
auto maybe_query = sight_receiver_->receive();
if (!maybe_query) { return GXF_CONTRACT_MESSAGE_NOT_AVAILABLE; }
auto maybe_json = maybe_query->get<::isaac::Json>();
if (!maybe_json) { return GXF_CONTRACT_MESSAGE_NOT_AVAILABLE; }
const ::isaac::Json* json = maybe_json.value().get();
In addition, the frontend will automatically load the js/config_v2.js and css/config_v2.css files.