# Fault Tolerance
Dynamo provides comprehensive fault tolerance mechanisms to ensure reliable LLM inference in production deployments. This section covers the various strategies and features that enable Dynamo to handle failures gracefully and maintain service availability.
## Overview
Fault tolerance in Dynamo operates at multiple levels:
| Layer | Mechanism | Purpose |
|---|---|---|
| Request | Migration, Cancellation | Handle in-flight request failures |
| Worker | Health Checks, Graceful Shutdown | Detect and recover from worker failures |
| System | Load Shedding, Request Rejection | Prevent system overload |
| Infrastructure | etcd HA, NATS resilience | Handle infrastructure component failures |
## Key Features

### Request Migration
When a worker fails during request processing, Dynamo can migrate in-progress requests to healthy workers. The migration system:
- Preserves partial generation state (accumulated tokens)
- Transparently continues generation on a new worker
- Maintains a seamless token stream to clients
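
The idea can be sketched in a few lines of Python; `workers` and `generate` here are illustrative stand-ins, not the actual Dynamo API:

```python
async def generate_with_migration(prompt_tokens, workers):
    """Stream tokens, migrating to another worker on mid-stream failure."""
    accumulated = list(prompt_tokens)  # prompt + tokens generated so far
    for worker in workers:             # candidate workers to try in turn
        try:
            # Resume generation from the accumulated token prefix.
            async for token in worker.generate(accumulated):
                accumulated.append(token)
                yield token            # the client sees one seamless stream
            return                     # generation completed normally
        except ConnectionError:
            continue                   # worker died mid-stream: migrate
    raise RuntimeError("all workers failed before generation completed")
```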
See Request Migration for details.
### Request Cancellation
Dynamo supports canceling in-flight requests to free computational resources:
- Graceful stop signals for clean termination
- Kill signals for immediate termination
- Hierarchical cancellation propagation through request chains
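
From the client's side, closing an in-flight streaming connection is typically enough to trigger cancellation downstream. A hedged sketch using `httpx` (the endpoint and payload are illustrative; `asyncio.timeout` requires Python 3.11+):

```python
import asyncio
import httpx

async def generate_then_cancel(seconds: float = 2.0):
    async with httpx.AsyncClient() as client:
        async with client.stream(
            "POST",
            "http://localhost:8000/v1/chat/completions",  # example frontend URL
            json={
                "model": "example-model",
                "messages": [{"role": "user", "content": "Write a long essay."}],
                "stream": True,
            },
        ) as response:
            try:
                async with asyncio.timeout(seconds):
                    async for line in response.aiter_lines():
                        print(line)
            except TimeoutError:
                # Exiting the stream context closes the connection, which
                # cancels the request and frees worker resources.
                pass

asyncio.run(generate_then_cancel())
```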
See Request Cancellation for details.
### Graceful Shutdown
Workers handle shutdown signals (SIGTERM/SIGINT) gracefully:
- Immediately stop accepting new requests
- Optionally drain in-flight requests before terminating
- Clean up resources (engines, connections, temp files)
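
An illustrative sketch of that sequence (not Dynamo's actual implementation):

```python
import asyncio
import signal

class Worker:
    def __init__(self):
        self.accepting = True                    # gate for new requests
        self.in_flight: set[asyncio.Task] = set()

    async def shutdown(self, drain: bool = True):
        self.accepting = False                   # 1. stop accepting immediately
        if drain and self.in_flight:             # 2. optionally drain in-flight
            await asyncio.gather(*self.in_flight, return_exceptions=True)
        # 3. clean up resources (engine, connections, temp files) here

async def main():
    worker = Worker()
    stop = asyncio.Event()
    loop = asyncio.get_running_loop()
    for sig in (signal.SIGTERM, signal.SIGINT):  # handle both shutdown signals
        loop.add_signal_handler(sig, stop.set)
    await stop.wait()                            # serve until signalled
    await worker.shutdown(drain=True)

asyncio.run(main())
```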
See Graceful Shutdown for details.
### Request Rejection (Load Shedding)
When workers are overloaded, Dynamo rejects new requests to prevent cascading failures:
- Configurable busy thresholds based on KV cache utilization
- Real-time worker load monitoring
- HTTP 503 responses with retry guidance
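
On the client side, 503 responses should be retried with backoff. A minimal sketch (endpoint illustrative), honoring a `Retry-After` header if the server sends one:

```python
import time
import requests

def post_with_retry(url: str, payload: dict, max_attempts: int = 5):
    for attempt in range(max_attempts):
        resp = requests.post(url, json=payload)
        if resp.status_code != 503:          # not shed: success or a real error
            resp.raise_for_status()
            return resp.json()
        # Prefer the server's retry hint; otherwise back off exponentially.
        delay = float(resp.headers.get("Retry-After", min(2 ** attempt, 30)))
        time.sleep(delay)
    raise RuntimeError("service still overloaded after retries")
```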
See Request Rejection for details.
### Health Checks
Dynamo provides multiple health check mechanisms:
- **HTTP Endpoints**: `/health` and `/live` endpoints for orchestration
- **Canary Health Checks**: Active monitoring via periodic test requests
- **Engine Monitoring**: Automatic shutdown on engine failure detection
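
A minimal external probe sketch; the health port and response bodies depend on your deployment configuration:

```python
import requests

def worker_is_healthy(base_url: str) -> bool:
    """Probe the liveness and health endpoints exposed by a worker."""
    try:
        live = requests.get(f"{base_url}/live", timeout=2)
        health = requests.get(f"{base_url}/health", timeout=2)
        return live.ok and health.ok
    except requests.RequestException:
        return False  # unreachable counts as unhealthy

print(worker_is_healthy("http://localhost:9090"))  # port is illustrative
```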
See Health Checks for details.
## Configuration Quick Reference
| Feature | Environment Variable | Default |
|---|---|---|
| Worker health port | | |
| Canary health checks | | |
| Canary wait time | | |
| Health check timeout | | |
| Decode blocks threshold | | None (disabled) |
| Prefill tokens threshold | | None (disabled) |
## Failure Scenarios and Recovery

### Worker Pod Restart
1. Worker receives SIGTERM from Kubernetes
2. Endpoints are immediately invalidated (no new requests)
3. In-flight requests complete or migrate (based on configuration)
4. Resources are cleaned up
5. Pod restarts with fresh state
### Worker Crash (Unexpected)
1. etcd lease expires (TTL-based detection)
2. Clients discover the endpoint removal via an etcd watch
3. New requests route to remaining healthy workers
4. In-flight requests on the crashed worker are migrated (if enabled)
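
The detection mechanism can be sketched with the `etcd3` Python client (Dynamo's own implementation is in Rust; key names here are illustrative). The worker registers its endpoint under a short-lived lease, and routers watch for the key's removal:

```python
import etcd3

client = etcd3.client()  # defaults to localhost:2379

# Worker side: register the endpoint under a 10-second lease. The worker
# must refresh the lease periodically (see the next scenario); if the
# process crashes, refreshes stop and the key vanishes when the TTL runs out.
lease = client.lease(ttl=10)
client.put("/endpoints/worker-1", "10.0.0.5:8000", lease=lease)

# Router side: watch the prefix and reroute when an endpoint disappears.
events, cancel = client.watch_prefix("/endpoints/")
for event in events:
    if isinstance(event, etcd3.events.DeleteEvent):
        print(f"endpoint {event.key.decode()} removed; rerouting traffic")
```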
### Network Partition
1. Worker loses connectivity to etcd/NATS
2. Lease keep-alive fails and the lease eventually expires
3. Worker is removed from service discovery
4. Traffic reroutes to reachable workers
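
Continuing the `etcd3` sketch above, the worker side keeps its lease alive with periodic refreshes; during a partition the refreshes fail, so the lease expires without any explicit deregistration:

```python
import time
import etcd3

client = etcd3.client()
lease = client.lease(ttl=10)
client.put("/endpoints/worker-1", "10.0.0.5:8000", lease=lease)

while True:
    try:
        lease.refresh()   # resets the TTL while connectivity holds
    except Exception:
        # Partitioned from etcd: stop refreshing; the lease will expire
        # server-side and traffic reroutes to reachable workers.
        break
    time.sleep(3)         # refresh well inside the 10-second TTL
```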
### GPU Failure
1. Engine health check detects a GPU error (XID, OOM, etc.)
2. Worker initiates graceful shutdown
3. Runtime is shut down and the engine is cleaned up
4. Process exits with code 1 to trigger a pod restart
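
An illustrative monitor loop, not Dynamo's internal code; `engine.ping()` and `engine.shutdown()` are hypothetical stand-ins:

```python
import asyncio
import sys

async def monitor_engine(engine, interval: float = 5.0):
    """Probe the engine periodically; exit non-zero so the pod restarts."""
    while True:
        try:
            # `engine.ping()` is a hypothetical probe that raises on
            # GPU/engine failure (XID error, OOM, hung kernel, ...).
            await engine.ping()
        except Exception as exc:
            print(f"engine unhealthy: {exc}; shutting down", file=sys.stderr)
            await engine.shutdown()   # tear down the runtime cleanly
            sys.exit(1)               # exit code 1 lets the orchestrator restart the pod
        await asyncio.sleep(interval)
```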
## Testing Fault Tolerance
Dynamo includes a comprehensive testing framework for validating fault tolerance:
- Request cancellation tests
- Migration tests with worker failures
- etcd HA failover tests
- Hardware fault injection (GPU XID errors, network partitions)
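
A migration test might look like the following sketch; the `cluster` and `client` fixtures and their methods are hypothetical, not the framework's actual API:

```python
import pytest

@pytest.mark.asyncio
async def test_stream_survives_worker_kill(cluster, client):
    stream = await client.start_stream(prompt="Tell me a story.")
    first = await stream.next_token()             # generation has started
    await cluster.kill_worker(stream.worker_id)   # inject the worker failure
    rest = [tok async for tok in stream]          # stream should keep flowing
    assert first and rest                         # tokens before and after the kill
```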
See Fault Tolerance Testing for details.