NV-CLIP (Latest)

Observability

NV-CLIP NIM supports exporting metrics and traces in an OpenTelemetry-compatible format.

Additionally, the underlying Triton service exposes its own metrics through a Prometheus endpoint.

To collect these metrics and traces, export them to a running OpenTelemetry Collector instance, which can then export them to any OTLP-compatible backend.

You can collect metrics from both the NV-CLIP NIM container and the underlying Triton instance.

Service Metrics

To enable metrics exporting from the NIM web service, set the NIM_OTEL_SERVICE_NAME, NIM_OTEL_METRICS_EXPORTER and NIM_OTEL_EXPORTER_OTLP_ENDPOINT environment variables when launching the NV-CLIP NIM container.
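For example, a minimal sketch of just the metrics-related variables; the service name nvclip-nim is an arbitrary label, and the endpoint placeholders match the full example later in this document:

# Metrics-related OTel variables (values taken from the full example below)
export NIM_OTEL_SERVICE_NAME=nvclip-nim    # arbitrary service name reported with the telemetry
export NIM_OTEL_METRICS_EXPORTER=otlp      # export metrics over OTLP
export NIM_OTEL_EXPORTER_OTLP_ENDPOINT="http://<opentelemetry-collector-endpoint>:<opentelemetry-collector-port>"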

Triton Metrics

Triton exposes its metrics on port 8002 in Prometheus format. To collect these metrics, use a Prometheus receiver to scrape the Triton endpoint and export them in an OpenTelemetry-compatible format. See the following example for details.
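As a quick check that the endpoint is reachable before wiring it into a collector, you can scrape it manually. This sketch assumes the NIM host is reachable at the <nim-endpoint> placeholder used later in this example and that Triton serves metrics at the standard /metrics path:

# Fetch Triton's metrics in Prometheus text format (port 8002)
curl http://<nim-endpoint>:8002/metrics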

Traces

To enable exporting traces from the NIM web service, set the NIM_OTEL_SERVICE_NAME, NIM_OTEL_TRACES_EXPORTER, and NIM_OTEL_EXPORTER_OTLP_ENDPOINT environment variables when launching the NV-CLIP NIM container.
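The trace-related variables follow the same pattern as the metrics variables; a minimal sketch:

export NIM_OTEL_SERVICE_NAME=nvclip-nim    # arbitrary service name attached to emitted traces
export NIM_OTEL_TRACES_EXPORTER=otlp       # export traces over OTLP
export NIM_OTEL_EXPORTER_OTLP_ENDPOINT="http://<opentelemetry-collector-endpoint>:<opentelemetry-collector-port>"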

Example

The following example requires that an instance of the OpenTelemetry Collector is running at <opentelemetry-collector-endpoint> on port <opentelemetry-collector-port>.
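If you do not already have a collector running, one way to start one (a sketch, not part of the official instructions) is with the opentelemetry-collector-contrib Docker image, which includes the Prometheus receiver. This assumes the configuration shown below is saved as otel-collector-config.yaml and that /etc/otelcol-contrib/config.yaml is the image's default configuration path:

# Run an OpenTelemetry Collector (contrib distribution) with the example configuration.
# 4317/4318 are the default OTLP gRPC/HTTP receiver ports.
docker run --rm \
  -p 4317:4317 -p 4318:4318 \
  -v "$(pwd)/otel-collector-config.yaml":/etc/otelcol-contrib/config.yaml \
  otel/opentelemetry-collector-contrib:latest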

Launching the NIM Container with OpenTelemetry Enabled

# Choose a container name for bookkeeping
export NIM_MODEL_NAME=nvidia/nvclip-vit-h-14
export CONTAINER_NAME=$(basename $NIM_MODEL_NAME)

# Choose a NIM Image from NGC
export IMG_NAME="nvcr.io/nim/$NIM_MODEL_NAME:1.0.0"

# Set the OTEL environment variables to enable metrics and trace exporting
export NIM_OTEL_SERVICE_NAME=$CONTAINER_NAME
export NIM_OTEL_METRICS_EXPORTER=otlp
export NIM_OTEL_TRACES_EXPORTER=otlp
export NIM_OTEL_EXPORTER_OTLP_ENDPOINT="http://<opentelemetry-collector-endpoint>:<opentelemetry-collector-port>"

docker run -it --rm --name=$CONTAINER_NAME \
  ... \
  -e NIM_OTEL_SERVICE_NAME \
  -e NIM_OTEL_METRICS_EXPORTER \
  -e NIM_OTEL_TRACES_EXPORTER \
  -e NIM_OTEL_EXPORTER_OTLP_ENDPOINT \
  ... \
  $IMG_NAME

Receiving and Exporting Telemetry Data with the OpenTelemetry Collector

The following OpenTelemetry Collector configuration enables both metrics and tracing exports.

Two receivers are defined:

  • An OTLP receiver, which receives both metrics and trace data from the NIM.

  • A Prometheus receiver, which scrapes Triton's own metrics.

Two exporters are described:

  • An OTLP exporter, which forwards data to a downstream collector or backend (for example, Datadog).

  • A debug exporter, which prints received data to the console. This is useful for testing and development purposes.

In this example configuration, traces are received by the OTLP receiver, and metrics are received by both the OTLP and Prometheus receivers; both pipelines export only to the debug exporter. To forward telemetry to a downstream backend, add an OTLP exporter to the exporters section and to each pipeline (see the sketch after the configuration).

receivers:
  otlp:
    protocols:
      grpc:
      http:
        cors:
          allowed_origins:
            - "*"
  prometheus:
    config:
      scrape_configs:
        - job_name: nim-triton-metrics
          scrape_interval: 10s
          static_configs:
            - targets: ["<nim-endpoint>:8002"]

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp, prometheus]
      exporters: [debug]
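As described above, an OTLP exporter can forward the collected telemetry to a downstream collector or backend such as Datadog. The following is a minimal sketch of how the exporters and pipelines sections could be extended; the <downstream-endpoint>:<downstream-port> placeholder and the insecure TLS setting are assumptions to adapt to your backend, and the receivers section is unchanged:

exporters:
  debug:
    verbosity: detailed
  otlp:
    # Hypothetical downstream destination; replace with your backend's OTLP endpoint.
    endpoint: "<downstream-endpoint>:<downstream-port>"
    tls:
      insecure: true   # assumption: plaintext connection; remove for TLS-enabled backends

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp, debug]
    metrics:
      receivers: [otlp, prometheus]
      exporters: [otlp, debug]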

© Copyright 2024, NVIDIA Corporation. Last updated on Oct 3, 2024.