OpenTelemetry Setup#
The NeMo microservices use OpenTelemetry for observability.
OpenTelemetry Collector#
Optionally, the NeMo microservices provide an OpenTelemetry Collector that can be configured to receive, process, and export telemetry data. Every microservice Helm chart's values.yaml has an opentelemetry-collector key that exposes the OpenTelemetry Collector configuration. To enable this service, set opentelemetry-collector.enabled: true. See the OpenTelemetry Collector Helm chart documentation for all configuration options.
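For example, a minimal values.yaml override that turns the bundled collector on could look like the following sketch. Only the opentelemetry-collector.enabled key comes from the documentation above; treat the surrounding layout as an assumption and adjust it to match your chart:

# values.yaml override for a NeMo microservice Helm chart (sketch)
opentelemetry-collector:
  # Deploy the bundled OpenTelemetry Collector alongside the microservice
  enabled: true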
OpenTelemetry Collector Glossary#
Receivers#
Receivers are the entry points for data ingestion in the OpenTelemetry Collector. They are responsible for accepting telemetry data (traces, metrics, and logs) from various sources and formats. Receivers implement specific protocols or specifications to consume data from different telemetry sources. An example receiver configuration follows the list below.
Key characteristics of receivers:
Accept data via various protocols, such as OTLP, Jaeger, and Zipkin
Support different data formats, such as protobuf and JSON
Can be configured to listen on specific network interfaces and ports
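For illustration, a receivers block that accepts OTLP data over both gRPC and HTTP might look like this sketch. The endpoint values are assumptions based on the conventional OTLP ports (4317 for gRPC, 4318 for HTTP); adjust them for your cluster:

receivers:
  otlp:
    protocols:
      # Listen for OTLP over gRPC on the conventional port (assumed value)
      grpc:
        endpoint: 0.0.0.0:4317
      # Listen for OTLP over HTTP on the conventional port (assumed value)
      http:
        endpoint: 0.0.0.0:4318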
Processors#
Processors are intermediary components in the OpenTelemetry Collector pipeline. They operate on data after it has been received but before it is exported. Processors can modify, filter, or enrich the telemetry data passing through the collector. An example processor configuration follows the list below.
Key functions of processors:
Data transformation and normalization
Filtering of unwanted data
Batching of data for improved performance
Addition of metadata or attributes to telemetry data
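As an example, the sketch below pairs the batch processor with the memory_limiter processor; both ship with the standard collector distributions, but the specific limits are assumptions to tune for your environment:

processors:
  # Guard the collector against out-of-memory conditions
  memory_limiter:
    check_interval: 1s   # how often memory usage is checked (assumed value)
    limit_mib: 512       # soft memory limit in MiB (assumed value)
  # Group telemetry into batches to reduce export overhead
  batch:
    timeout: 5s            # flush a batch after at most 5 seconds (assumed value)
    send_batch_size: 1024  # or once 1024 items have accumulated (assumed value)

When both are used, memory_limiter is conventionally listed before batch in each pipeline's processors list.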
Exporters#
Exporters are the exit points for data in the OpenTelemetry Collector. They are responsible for sending processed telemetry data to one or more backends, databases, or monitoring systems. Exporters translate the collector's internal data representation into the format required by the receiving system. Exporters can be pull- or push-based and may support one or more data sources. See the OpenTelemetry Collector documentation for the full list of supported exporters. An example exporter configuration follows the list below.
Key aspects of exporters:
Support various output formats and protocols
Can be configured to send data to multiple destinations simultaneously
Handle retries and buffering in case of network issues or backpressure
Can print telemetry data to stdout for debugging
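For example, an exporters block that forwards telemetry to an OTLP/HTTP-compatible backend while keeping stdout output for troubleshooting might look like this sketch; the endpoint is a placeholder for whatever backend runs in your cluster:

exporters:
  # Forward telemetry to an OTLP/HTTP-compatible backend (placeholder endpoint)
  otlphttp:
    endpoint: http://my-observability-backend:4318
  # Keep the debug exporter for local troubleshooting
  debug:
    verbosity: detailed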
The default OpenTelemetry Collector configuration in values.yaml:
# Define the receivers section
# Receivers are responsible for accepting telemetry data (such as logs, metrics, and traces) from various sources.
# Think of a receiver as an API endpoint that listens for incoming data. This configuration accepts data via the gRPC or HTTP protocol.
receivers:
  # Configure the OTLP (OpenTelemetry Protocol) receiver
  otlp:
    # Specify the protocols supported by the OTLP receiver
    protocols:
      # Enable gRPC protocol support
      grpc:
      # Enable HTTP protocol support
      http:
        # Configure CORS (Cross-Origin Resource Sharing) for HTTP
        cors:
          # Allow requests from any origin
          allowed_origins:
            - "*"

# Define the exporters section
exporters:
  # Configure the debug exporter
  debug:
    # Set the verbosity level to detailed for more information
    verbosity: detailed

# Define the processors section
processors:
  # Use the default batch processor configuration (groups data before exporting)
  batch:

# Define the service section, which ties together receivers, processors, and exporters
service:
  # Configure the data processing pipelines
  pipelines:
    # Set up the traces pipeline
    traces:
      # Specify receivers for the traces pipeline (using OTLP)
      receivers: [otlp]
      # Specify processors for the traces pipeline (using the batch processor)
      processors: [batch]
      # Specify exporters for the traces pipeline (using the debug exporter)
      # This prints traces to stdout. Replace it with the exporter used in your cluster.
      exporters: [debug]
    # Set up the metrics pipeline
    metrics:
      # Specify receivers for the metrics pipeline (using OTLP)
      receivers: [otlp]
      # Specify processors for the metrics pipeline (using the batch processor)
      processors: [batch]
      # Specify exporters for the metrics pipeline (using the debug exporter)
      # This prints metrics to stdout. Replace it with the exporter used in your cluster.
      exporters: [debug]
    # Set up the logs pipeline
    logs:
      # Specify receivers for the logs pipeline (using OTLP)
      receivers: [otlp]
      # Specify processors for the logs pipeline (using the batch processor)
      processors: [batch]
      # Specify exporters for the logs pipeline (using the debug exporter)
      # This prints logs to stdout. Replace it with the exporter used in your cluster.
      exporters: [debug]
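To send telemetry somewhere other than stdout, override the exporter and pipeline definitions through the opentelemetry-collector key in your values.yaml. The sketch below swaps the debug exporter for an OTLP/HTTP exporter in the traces pipeline; the config key layout is an assumption based on the upstream OpenTelemetry Collector Helm chart, and the endpoint is a placeholder for your cluster's backend:

opentelemetry-collector:
  enabled: true
  config:
    exporters:
      # OTLP/HTTP exporter pointing at the backend used in your cluster (placeholder endpoint)
      otlphttp:
        endpoint: http://my-observability-backend:4318
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          # Export traces to the backend instead of printing them to stdout
          exporters: [otlphttp]

The same pattern applies to the metrics and logs pipelines.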