DeepStream OpenTelemetry Support#
DeepStream provides OpenTelemetry support via nvmultiurisrcbin, enabling the export of pipeline performance metrics to observability platforms like Prometheus and Grafana.
OpenTelemetry support is available for both x86 (DeepStream 9.0 onwards) and Jetson platforms. The Gst-nvdslogger plugin collects performance data (FPS, latency, frame numbers) from the GStreamer pipeline. This data is stored in shared memory structures managed by the nvds_rest_metrics library and exported to OpenTelemetry collectors using OTLP/HTTP.
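For orientation, the data that reaches the collector over OTLP/HTTP is a JSON (or protobuf) metrics payload. The sketch below builds a minimal OTLP JSON payload for a single gauge sample; the field names follow the OTLP JSON encoding, but the exact attribute set that nvds_rest_metrics emits is not shown here and the label choices are illustrative assumptions.

```python
import json
import time

def build_otlp_gauge_payload(metric_name, value, service_name="deepstream-app"):
    """Build a minimal OTLP/HTTP JSON metrics payload with one gauge
    data point. Field names follow the OTLP JSON encoding; the resource
    attributes are illustrative, not what nvds_rest_metrics emits."""
    return {
        "resourceMetrics": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": service_name}},
            ]},
            "scopeMetrics": [{
                "scope": {"name": "nvds_rest_metrics"},
                "metrics": [{
                    "name": metric_name,
                    "gauge": {"dataPoints": [{
                        "asDouble": value,
                        "timeUnixNano": str(time.time_ns()),
                    }]},
                }],
            }],
        }]
    }

payload = build_otlp_gauge_payload("stream_fps", 29.7)
body = json.dumps(payload).encode()
# The exporter POSTs a body like this, with Content-Type application/json,
# to the collector's /v1/metrics endpoint.
```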
Note
OpenTelemetry is supported through nvmultiurisrcbin. To activate it, incorporate nvmultiurisrcbin into your pipeline.
Configuration#
Configure OpenTelemetry using environment variables:
| Environment Variable | Description |
|---|---|
|  | Set to |
|  | Service identifier (e.g., ) |
|  | Collector base URL (e.g., ) |
|  | Metric export interval in milliseconds (default: ) |
|  | Export destination: |
Set the following parameters in the deepstream-test5 application configuration file (/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test5/configs/test5_config_file_nvmultiurisrcbin_src_list_attr_all.txt):
[tiled-display]
enable=3
[sinkN]
nvdslogger=1
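The two settings above are plain key=value entries in the test5 INI-style config, so a quick pre-flight check with Python's configparser can confirm they are present before launching. This is only a sketch: `sink0` stands in for the actual sink section index N in your config.

```python
import configparser

# Minimal excerpt of a test5 config; sink0 stands in for sinkN.
config_text = """
[tiled-display]
enable=3

[sink0]
nvdslogger=1
"""

cfg = configparser.ConfigParser()
cfg.read_string(config_text)

# Confirm the two OpenTelemetry-related settings are in place.
assert cfg.getint("tiled-display", "enable") == 3
assert cfg.getint("sink0", "nvdslogger") == 1
print("nvdslogger enabled for sink0")
```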
Supported Prometheus Metrics#
Stream Performance Metrics#
| Metric Name | Description | Typical Value |
|---|---|---|
| `stream_fps` | Frames per second processed for each stream | 25-30 (depends on source) |
| `stream_latency` | End-to-end pipeline latency in milliseconds | 30-100 ms (lower is better) |
| `stream_frame_number` | Current frame number being processed for each stream | Monotonically increasing |
|  | Total number of active streams being processed | Based on configuration |
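When the collector's Prometheus exporter serves these metrics, each stream shows up as one sample in the Prometheus text exposition format. The parser below is a rough sketch for inspecting such a scrape by hand; the sample lines and the `stream_id` label name are illustrative assumptions, not the exact labels DeepStream emits.

```python
# Illustrative scrape body; `stream_id` is an assumed label name.
scrape = """\
# TYPE stream_fps gauge
stream_fps{stream_id="0"} 29.8
stream_fps{stream_id="1"} 25.1
# TYPE stream_latency gauge
stream_latency{stream_id="0"} 42.5
"""

def parse_samples(text, metric):
    """Collect {labeled series: value} for one metric from a
    Prometheus text-format scrape (simplified: no escapes/timestamps)."""
    samples = {}
    for line in text.splitlines():
        if line.startswith(metric + "{"):
            labels, value = line.rsplit(" ", 1)
            samples[labels] = float(value)
    return samples

fps = parse_samples(scrape, "stream_fps")
print(fps)  # one entry per active stream
```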
System Resource Metrics#
| Metric Name | Description |
|---|---|
|  | CPU utilization percentage across all cores |
|  | GPU compute utilization percentage |
|  | System RAM usage in gigabytes |
| `gpu_memory_gb` | GPU memory usage in gigabytes |
Note
gpu_memory_gb is not applicable on aarch64 devices (e.g., Jetson Thor): these devices use unified memory, so the metric is reported as -1.
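A consumer of gpu_memory_gb should therefore treat -1 as "not measurable" rather than as zero usage. A minimal guard, as a sketch:

```python
def gpu_memory_reading(value_gb):
    """Interpret a gpu_memory_gb sample. On unified-memory aarch64
    devices the metric is reported as -1, meaning 'not applicable'."""
    if value_gb == -1:
        return None  # unified memory: no discrete GPU memory figure
    return value_gb

print(gpu_memory_reading(-1))   # None on Jetson-class devices
print(gpu_memory_reading(7.5))  # plain passthrough elsewhere
```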
OpenTelemetry Collector Configuration#
To filter out inactive stream metrics, add the following processor to your collector configuration:
processors:
  filter/drop_inactive_streams:
    error_mode: ignore
    metrics:
      datapoint:
        - 'metric.name == "stream_fps" and value_double == -1.0'
        - 'metric.name == "stream_latency" and value_double == -1.0'
        - 'metric.name == "stream_frame_number" and value_int == -1'
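The three OTTL conditions above drop any data point whose value is the -1 "inactive" sentinel. In plain terms, the processor behaves like this sketch (metric names come from the conditions; the data-point representation is simplified, and `other_metric` is an illustrative name):

```python
# Sketch of filter/drop_inactive_streams: data points matching any
# condition are DROPPED; everything else passes through unchanged.
INACTIVE_SENTINEL = -1
STREAM_METRICS = {"stream_fps", "stream_latency", "stream_frame_number"}

def keep_datapoint(metric_name, value):
    return not (metric_name in STREAM_METRICS and value == INACTIVE_SENTINEL)

points = [
    ("stream_fps", 29.8),
    ("stream_fps", -1.0),         # inactive stream: dropped
    ("stream_frame_number", -1),  # inactive stream: dropped
    ("other_metric", -1),         # not a per-stream metric: kept
]
kept = [(n, v) for n, v in points if keep_datapoint(n, v)]
print(kept)
```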
If exporting to Prometheus, set metric_expiration to a value greater than or equal to the OTLP export interval, so that stale series are dropped without expiring actively updated ones:
exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
    metric_expiration: 4s
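The reason for that constraint: the Prometheus exporter expires a series if no update arrives within metric_expiration, so an expiration shorter than the export interval would let every series lapse between exports. A trivial check, as a sketch (the 2000 ms interval is an illustrative value):

```python
# Sketch: does metric_expiration cover the OTLP export interval,
# so series survive from one export to the next?
def expiration_covers_interval(export_interval_ms, metric_expiration_s):
    return metric_expiration_s * 1000 >= export_interval_ms

print(expiration_covers_interval(2000, 4))   # exporting every 2 s, 4 s expiry: OK
print(expiration_covers_interval(10000, 4))  # 10 s interval, 4 s expiry: series lapse
```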
References#
For more information about OpenTelemetry, refer to the following resources:
OTLP/HTTP Specification - OpenTelemetry Protocol specification for HTTP transport
OpenTelemetry Documentation - Official OpenTelemetry documentation
OpenTelemetry Collector - OpenTelemetry Collector documentation