Event Exporter

Overview

The Event Exporter streams health events from NVSentinel’s datastore to external systems, transforming them into CloudEvents format for integration with enterprise monitoring, analytics, and alerting platforms.

Think of it as a data bridge - it takes health events generated by NVSentinel and delivers them to your external systems for centralized visibility, long-term storage, or integration with existing incident management workflows.

Why Do You Need This?

While NVSentinel handles automated remediation within the cluster, you often have additional requirements:

  • Centralized monitoring: Aggregate events from multiple clusters into a single pane of glass
  • Long-term analytics: Store events in data warehouses for trend analysis and reporting
  • Integration: Feed events into existing incident management, ticketing, or alerting systems
  • Compliance: Meet audit and compliance requirements for event logging
  • Multi-cluster visibility: Track GPU health across your entire infrastructure

The Event Exporter enables these use cases by streaming events to your external systems in real-time using industry-standard CloudEvents format.

How It Works

The Event Exporter runs as a deployment in the cluster:

  1. Watches the datastore for new health events using change streams
  2. On first startup (if enabled), backfills historical events from the past N days
  3. Transforms health events into CloudEvents format with custom metadata
  4. Publishes events to configured HTTP endpoint with OIDC authentication
  5. Tracks progress using resume tokens for reliable delivery
  6. Retries failed publishes with exponential backoff

The exporter maintains at-least-once delivery semantics by persisting resume tokens, ensuring no events are lost even if the exporter restarts.
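Steps 1, 4, and 5 above can be sketched as a single loop: resume from the last persisted token, and commit a new token only after the event has been published. The datastore and publish interfaces below are hypothetical stand-ins for illustration, not NVSentinel's actual API.

```python
class InMemoryStore:
    """Toy datastore: a list of events plus a persisted resume token."""

    def __init__(self, events):
        self.events = events  # ordered health events
        self.token = None     # persisted resume token (None on first run)

    def watch(self, resume_after):
        # Yield (event, resume_token) pairs starting after the saved token.
        start = 0 if resume_after is None else resume_after
        for i in range(start, len(self.events)):
            yield self.events[i], i + 1


def run_exporter(store, publish):
    """Publish each event, committing the resume token only on success.

    Because the token is persisted *after* the publish, a crash between the
    two re-delivers the event on restart: at-least-once delivery.
    """
    token = store.token
    for event, new_token in store.watch(resume_after=token):
        publish(event)          # may raise; the real exporter retries
        store.token = new_token  # commit progress only after success
```

If the publish of an event fails and the process restarts, the saved token still points before that event, so it is re-read from the change stream and delivered again rather than lost.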

Configuration

Configure the Event Exporter through Helm values:

```yaml
event-exporter:
  enabled: true

  # OIDC secret (must be created manually)
  oidcSecretName: "event-exporter-oidc-secret"

  exporter:
    # Metadata included with every event
    metadata:
      cluster: "production-us-west"
      environment: "production"
      region: "us-west-2"

    # Destination endpoint
    sink:
      endpoint: "https://events.example.com/api/v1/events"
      timeout: "30s"
      insecureSkipVerify: false

    # OIDC authentication
    oidc:
      tokenUrl: "https://auth.example.com/oauth2/token"
      clientId: "nvsentinel-exporter"
      scope: "events:write"
      insecureSkipVerify: false

    # Historical event backfill
    backfill:
      enabled: true
      maxAge: "720h" # 30 days
      maxEvents: 1000000
      batchSize: 500
      rateLimit: 1000 # events/second

    # Concurrent publish workers
    workers: 10 # See scale-up guide below

    # Failure handling
    failureHandling:
      maxRetries: 17 # ~30 minutes
      initialBackoff: "1s"
      maxBackoff: "5m"
      backoffMultiplier: 2.0
```
Configuration Options

  • Metadata: Custom key-value pairs included with every event (cluster name is required)
  • Sink Endpoint: HTTP/HTTPS URL where CloudEvents are posted
  • OIDC Authentication: OAuth2 client credentials for endpoint authentication
  • Backfill: On first startup, optionally export historical events (disabled after initial run)
  • Workers: Number of concurrent goroutines that publish events to the sink in parallel (default 10). Concurrent publishing means events may arrive at the sink out of order, but resume tokens still advance in strict order, so at-least-once delivery is preserved.
  • Retry Policy: Exponential backoff configuration for failed publishes
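The retry policy options compose in a straightforward way: the delay starts at `initialBackoff`, is multiplied by `backoffMultiplier` after each attempt, and is capped at `maxBackoff`. The sketch below computes that schedule for the defaults shown above; how the real exporter counts attempts, and whether it adds jitter, are assumptions not confirmed by this document.

```python
def backoff_schedule(max_retries=17, initial=1.0, cap=300.0, multiplier=2.0):
    """Per-retry delays in seconds for the default retry policy above."""
    delays, delay = [], initial
    for _ in range(max_retries):
        delays.append(min(delay, cap))  # never wait longer than maxBackoff
        delay *= multiplier
    return delays
```

With the defaults, the delays run 1s, 2s, 4s, ... doubling until they hit the 5-minute (`300s`) cap, after which every remaining retry waits the full cap.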

CloudEvents Format

Events are transformed into CloudEvents v1.0 format:

```json
{
  "specversion": "1.0",
  "type": "com.nvidia.nvsentinel.health.v1",
  "source": "nvsentinel://production-us-west/healthevents",
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "time": "2025-11-27T10:30:00Z",
  "data": {
    "metadata": {
      "cluster": "production-us-west",
      "environment": "production",
      "region": "us-west-2"
    },
    "healthEvent": {
      "version": "v1",
      "agent": "gpu-health-monitor",
      "componentClass": "GPU",
      "checkName": "XIDError",
      "nodeName": "gpu-node-01",
      "message": "GPU XID error detected",
      "isFatal": true,
      "isHealthy": false,
      "recommendedAction": 2,
      "errorCode": ["XID_79"],
      "entitiesImpacted": [
        {
          "entityType": "GPU",
          "entityValue": "GPU-abc123"
        }
      ],
      "generatedTimestamp": "2025-11-27T10:30:00Z"
    }
  }
}
```
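The envelope above is plain JSON, so it can be assembled with the standard library. This helper is illustrative (field names follow the example; the function itself is not the exporter's code):

```python
import json
import uuid
from datetime import datetime, timezone


def to_cloudevent(cluster, metadata, health_event):
    """Wrap a health event in the CloudEvents v1.0 envelope shown above."""
    return {
        "specversion": "1.0",
        "type": "com.nvidia.nvsentinel.health.v1",
        "source": f"nvsentinel://{cluster}/healthevents",
        "id": str(uuid.uuid4()),  # unique per event
        "time": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "data": {"metadata": metadata, "healthEvent": health_event},
    }
```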

Key Features

CloudEvents Standard

Uses industry-standard CloudEvents v1.0 format for broad compatibility with event processing platforms.

Historical Backfill

On first deployment, optionally exports up to N days of historical events for complete visibility.

Resume Token Tracking

Persists progress in the datastore to ensure at-least-once delivery - no events lost on restart.

OIDC Authentication

Supports OAuth2 client credentials flow with automatic token refresh for secure authentication.
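"Automatic token refresh" typically means caching the access token and re-fetching it via the client-credentials grant shortly before it expires. A minimal sketch of that caching logic, with the actual token request injected as a callable (in practice it would POST `grant_type=client_credentials` to `tokenUrl`); all names here are illustrative:

```python
import time


class TokenProvider:
    """Cache an OAuth2 access token, refreshing it before expiry."""

    def __init__(self, fetch_token, skew=30.0):
        self._fetch = fetch_token  # returns (access_token, expires_in_seconds)
        self._skew = skew          # refresh this many seconds early
        self._token = None
        self._expiry = 0.0

    def token(self, now=None):
        now = time.time() if now is None else now
        if self._token is None or now >= self._expiry - self._skew:
            self._token, expires_in = self._fetch()
            self._expiry = now + expires_in
        return self._token
```

Refreshing slightly early (the `skew`) avoids presenting a token that expires mid-request.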

Exponential Backoff

Retries failed publishes with configurable exponential backoff (up to ~30 minutes by default).

Custom Metadata

Enriches every event with custom metadata (cluster, environment, region, etc.) for filtering and routing.

Rate Limiting

Configurable rate limiting for backfill to avoid overwhelming destination systems.
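One common way to implement an events-per-second limit like `backfill.rateLimit` is a token bucket: capacity refills continuously at the configured rate, and each publish spends one unit, sleeping when the bucket is empty. This is an illustrative implementation, not necessarily the exporter's:

```python
import time


class RateLimiter:
    """Token-bucket limiter: allow at most `rate` events per second."""

    def __init__(self, rate):
        self.rate = rate            # events per second
        self.allowance = rate       # current bucket level (starts full)
        self.last = time.monotonic()

    def acquire(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the bucket size.
        self.allowance = min(self.rate,
                             self.allowance + (now - self.last) * self.rate)
        self.last = now
        if self.allowance < 1.0:
            # Bucket empty: sleep until one unit has accumulated.
            time.sleep((1.0 - self.allowance) / self.rate)
            self.allowance = 1.0
        self.allowance -= 1.0
```

Because the bucket starts full, short bursts pass immediately; sustained throughput converges to the configured rate.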

Concurrent Workers

Publishes events in parallel using a configurable worker pool. A sequence tracker ensures resume tokens advance in strict order regardless of which worker finishes first, preserving at-least-once delivery guarantees. Note that concurrent publishing means events may arrive at the sink out of order. See the configuration reference for sizing guidance.
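The ordering guarantee described above reduces to a small data structure: workers report completed sequence numbers in any order, and the commit point only advances through the contiguous prefix of finished work. A sketch (illustrative, not NVSentinel's actual tracker):

```python
import heapq


class SequenceTracker:
    """Advance the resume token only through completed work, in order."""

    def __init__(self):
        self._done = []      # min-heap of completed sequence numbers
        self._next = 0       # next sequence number eligible to commit
        self.committed = -1  # highest sequence safely persisted

    def complete(self, seq):
        heapq.heappush(self._done, seq)
        # Commit through the contiguous prefix of finished sequences.
        while self._done and self._done[0] == self._next:
            heapq.heappop(self._done)
            self.committed = self._next
            self._next += 1
```

If worker B finishes event 2 before worker A finishes event 0, the token stays put; once event 0 completes, the commit point jumps forward past every already-finished event, so a restart never skips an unpublished event.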

Change Stream Based

Uses datastore change streams for real-time event delivery with minimal latency.