KV Cache Routing#
This document explains how Dynamo’s Key-Value (KV) cache routing optimizes large language model inference by intelligently directing requests to workers with the most relevant cached data, while maintaining load balance through worker utilization metrics.
To enable KV cache aware routing, start the frontend node like this:
python -m dynamo.frontend --router-mode kv
When KV blocks are created or removed, the engine notifies the Dynamo router, which then identifies the worker with the best matching blocks and routes traffic accordingly.
To evaluate the benefits of KV-aware routing, compare your workload’s performance using --router-mode random|round-robin
against KV-aware routing.
The main KV-aware routing arguments:
--kv-overlap-score-weight
: Controls the importance of prefix cache overlaps in prefill cost calculations. Higher values improve Time To First Token (TTFT) at the cost of Inter-Token Latency (ITL). When set to 0, the router ignores prefix caches and uses pure load balancing. Defaults to 1.
--router-temperature
: Controls worker selection randomness through softmax sampling of router cost logits. A value of 0 (default) ensures deterministic selection of the lowest-cost worker, while higher values introduce more randomness.
--no-kv-events
: Disables KV event tracking. By default (when this flag is not provided), the router uses KvIndexer to monitor block creation and deletion events. When this flag is set, the router uses ApproxKvIndexer, which estimates cache hits based on a fixed time window (120s). Use this flag if your backend doesn't support KV events, or if you are not confident in the accuracy or responsiveness of those events.
--router-replica-sync
: Disabled by default. Enables NATS-based synchronization of local routing decisions between router replicas. When enabled, routers share their active sequence information and local predictions of block usage, improving routing consistency across instances. Note that this does not sync the radix tree or cached KV block states themselves; those are synchronized through JetStream events.
--router-reset-states
: When specified, resets the router state on startup by clearing both the JetStream event stream and the NATS object store, starting with a fresh state. By default (when this flag is not provided), the router persists state across restarts, downloading any available snapshot from the NATS object store and continuing to consume events from where it left off. This enables routers to maintain KV cache awareness across restarts. Warning: Using --router-reset-states can bring existing router replicas into an inconsistent state. Only use this flag when launching the first router replica in a component, or consider using a different namespace/component for a clean slate.
--router-snapshot-threshold
: Sets the number of messages in the JetStream stream before triggering a snapshot. When the message count exceeds this threshold, a router will attempt to purge acknowledged messages from the stream and create a snapshot of the current radix tree state in the NATS object store. Defaults to 10000. This helps manage stream size and provides faster initialization for routers that restart.
Note
State persistence is only available when KV events are enabled (default). When using --no-kv-events with ApproxKvIndexer, state persistence is not currently supported.
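For example, a frontend launch that combines several of the flags above might look like the following (the specific values are illustrative, not recommendations):

python -m dynamo.frontend --router-mode kv --kv-overlap-score-weight 2.0 --router-temperature 0.5 --router-replica-sync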
Architecture#
Colloquially, we refer to a Dynamo component that serves an endpoint for LLM inference as a worker.
Basic Routing#
Dynamo supports several routing strategies when sending requests from one component to another component’s endpoint.
First, we must create a client tied to a component's endpoint, which we can do using the labels defined above. Here we are getting a client tied to the generate endpoint of the VllmWorker component.
client = namespace('dynamo').component('VllmWorker').endpoint('generate').client()
We can then use the default routing methods exposed by the client class to send requests to the VllmWorker
component.
Random routing: Default strategy, available via client.generate() or client.random()
Round-robin routing: Cycles through available workers via client.round_robin()
Direct routing: Explicitly targets a specific worker via client.direct(input, component_id)
KV Cache routing uses direct routing with a special worker selection algorithm.
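As a rough sketch of how these calls fit together (assumptions: the methods are awaitable and take the request payload shown; the exact payload shape and signatures depend on your worker's endpoint):

# Hypothetical usage sketch of the routing helpers listed above; `request` is a
# placeholder for whatever payload the generate endpoint expects.
client = namespace('dynamo').component('VllmWorker').endpoint('generate').client()

stream = await client.generate(request)              # default (random) routing
stream = await client.random(request)                # explicit random routing
stream = await client.round_robin(request)           # cycle through workers
stream = await client.direct(request, component_id)  # pin to a specific worker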
Serving Multiple Router Replicas#
For improved fault tolerance, you can launch multiple frontend + router replicas. Since the frontend and router are currently tied together, you’ll need to use different HTTP ports for each instance. (The separation of the frontend and Router is WIP.)
Router State Management#
The KV Router tracks two types of state (see KV Router Architecture for details):
Prefix blocks (cached KV blocks): Maintained in a radix tree, tracking which blocks are cached on each worker. This state is persistent - backed by NATS JetStream events and object store snapshots. New router replicas automatically sync this state on startup, ensuring consistent cache awareness across restarts.
Active blocks (decoding blocks): Tracks blocks currently being used for active generation requests. This state is ephemeral - when a new router replica starts, it begins with zero active block knowledge but becomes eventually consistent as it handles requests.
Enabling Router Replica Synchronization#
# Router replica 1
python -m dynamo.frontend --router-mode kv --port 8000 --router-replica-sync
# Router replica 2 (can be started later)
python -m dynamo.frontend --router-mode kv --port 8001 --router-replica-sync
The --router-replica-sync
flag enables active block synchronization between replicas:
Active blocks are shared via NATS core messaging (fire-and-forget)
Replicas exchange routing decisions to maintain consistent load estimates
A new replica starts with zero active blocks but quickly converges through request handling, both on its own and via active syncing with other replicas
Without this flag, each replica maintains its own isolated view of active blocks, potentially leading to suboptimal routing.
Persistence and Recovery#
Prefix blocks persist by default:
Stored in NATS JetStream with 1-hour retention
Snapshots saved to NATS object store at configurable thresholds
New replicas automatically restore this state on startup
You can launch a third router replica even if the first two are down, and it will recover the full prefix state. (As mentioned above, the tracking of active blocks will not persist, but will become eventually consistent through request handling.)
python -m dynamo.frontend --router-mode kv --port 8002 --router-replica-sync
Note
If you need to start with a fresh state, you have two options:
Recommended: Use a different namespace/component (see Distributed Runtime) which will start a new stream and NATS object store path
Use with caution: Launch a router with the --router-reset-states flag, which will purge the entire stream and radix snapshot. This should only be done when launching the first router replica in a component, as it can bring existing router replicas into an inconsistent state.
Understanding KV Cache#
The leading Large Language Models (LLMs) today are auto-regressive and based on the transformer architecture. One key inference optimization is to cache the already-computed keys and values and reuse them for future tokens. This is called the KV Cache.
KV Cache Optimizations#
Every inference framework maintains a KV cache for each worker. A popular inference framework is vLLM, whose key contribution, PagedAttention, manages the KV cache efficiently by chunking requests into blocks.
Another popular inference framework, SGLang, contributed RadixAttention, which introduced a prefix tree that allows efficient matching, insertion, and eviction of KV cache blocks. The prefix tree structure popularized KV cache reuse.
In Dynamo, we introduce a KVPublisher which emits KV Cache events that occur at each worker and a KVIndexer which keeps track of these events globally.
To get a feel for how KV cache management works on a single worker with KV cache reuse turned on, and where the KVPublisher gets plugged in, we can walk through the KV block management flow (a simplified code sketch follows these steps):
Request tokenization: The incoming prompt is converted into tokens
Block partitioning: The token sequence is divided into fixed-size blocks (e.g., 16 or 64 tokens per block)
Block hashing: Each block of tokens is hashed to create a unique identifier
Cache lookup:
For each block, the system checks if a matching block already exists in the KV cache
If a match is found, the existing KV cache block is reused
If no match is found, the system proceeds to the next step
Resource allocation:
For blocks without matches, the system attempts to allocate new memory space
If sufficient memory is available, allocate memory space and proceed to step 7
If memory is constrained, proceed to step 6
Cache eviction (when necessary):
The system applies an eviction policy (e.g., LRU, LFU) to identify blocks for removal
Selected blocks are evicted from the cache
KVPublisher emits a KV removed event notifying KVIndexer about the removed block.
Alternatively, some systems may offload less-frequently used blocks to CPU memory.
KV computation:
For new blocks, the model computes key and value tensors
These tensors are stored in the newly allocated cache blocks
KVPublisher emits a KV stored event notifying KVIndexer about newly stored blocks.
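The sketch below illustrates steps 2 through 6 above (block partitioning, hashing, cache lookup, and eviction) in framework-agnostic Python. The block size, hashing scheme, capacity, and LRU eviction policy are illustrative assumptions, not Dynamo's or any engine's actual implementation:

import hashlib
from collections import OrderedDict

BLOCK_SIZE = 16   # tokens per block (engines commonly use 16 or 64)
MAX_BLOCKS = 4    # toy cache capacity, small enough to force eviction

def block_hashes(tokens: list[int]) -> list[str]:
    """Partition the token sequence into full fixed-size blocks and hash each one.
    Each hash covers all preceding blocks, so identical prefixes yield identical
    hash chains (which is what makes prefix reuse possible)."""
    hashes, prefix = [], b""
    for i in range(0, len(tokens) - len(tokens) % BLOCK_SIZE, BLOCK_SIZE):
        prefix += bytes(str(tokens[i:i + BLOCK_SIZE]), "utf-8")
        hashes.append(hashlib.sha256(prefix).hexdigest())
    return hashes

# Toy per-worker cache: block hash -> KV tensors, with LRU eviction.
cache: "OrderedDict[str, object]" = OrderedDict()

def lookup_or_allocate(tokens: list[int]) -> None:
    for h in block_hashes(tokens):
        if h in cache:                     # cache hit: reuse the existing block
            cache.move_to_end(h)
        else:                              # cache miss: allocate, evicting if full
            if len(cache) >= MAX_BLOCKS:
                cache.popitem(last=False)  # evict the least-recently-used block
                # a real worker would emit a "KV removed" event here
            cache[h] = object()            # placeholder for the computed K/V tensors
            # a real worker would emit a "KV stored" event here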
KV Cache Routing and Load Balancing#
+---------+ +------------------+ +---------+
| Tokens |--------->| KV Aware Router |---------> | Worker 2|
+---------+ +------------------+ +---------+
|
+------------------+------------------+
| | |
| Cached: 2 blocks | Cached: 5 blocks | Cached: 8 blocks
| Prefill: 8 blks | Prefill: 5 blks | Prefill: 2 blks
| Decode: 10 blks | Decode: 5 blks | Decode: 9 blks
v v v
+----------------+ +----------------+ +----------------+
| Worker 1 | | Worker 2 | | Worker 3 |
+----------------+ +----------------+ +----------------+
KV Cache reuse introduces complexity to LLM serving load balancing. While it can significantly reduce computation costs, routing strategies that ignore worker-specific KV states can lead to:
Missed cache reuse opportunities due to suboptimal worker selection
System throughput degradation from uneven request distribution across workers
The router uses a cost function that considers both the prefill cost (influenced by cached blocks) and the decode load to make optimal routing decisions:
Cost Calculation#
Prefill blocks: Calculated by dividing the number of tokens requiring prefill processing by the block size. The system predicts this based on input tokens and available cached blocks per worker, updating the count when the first output token signals prefill completion.
Decode blocks: Estimated from the request’s input tokens and each worker’s active sequences. The count updates when requests complete and their blocks are freed.
Cost formula:
cost = overlap_score_weight * prefill_blocks + decode_blocks
Lower costs indicate better routing choices
overlap_score_weight balances cache hit optimization against load distribution
Higher weights favor cache reuse (improving TTFT), while lower weights prioritize even load distribution (improving ITL)
Worker Selection#
The router selects the worker with the lowest cost. When router_temperature
is set to a non-zero value, the router uses softmax sampling on the normalized cost logits to introduce randomness in the selection, which can help with load distribution.
Example calculation with overlap_score_weight = 1.0 (a code sketch of this selection logic follows the example):
Worker 1: cost = 1.0 * 8 + 10 = 18
Worker 2: cost = 1.0 * 5 + 5 = 10 (selected - lowest cost)
Worker 3: cost = 1.0 * 2 + 9 = 11
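Below is a small Python sketch of the selection logic, using the worker numbers from the example above. It is a simplified illustration; the real router's cost-logit normalization may differ:

import math
import random

overlap_score_weight = 1.0
router_temperature = 0.0   # 0 = deterministic lowest-cost pick, >0 = softmax sampling

# (prefill_blocks, decode_blocks) per worker, matching the example above
workers = {"worker1": (8, 10), "worker2": (5, 5), "worker3": (2, 9)}

costs = {w: overlap_score_weight * p + d for w, (p, d) in workers.items()}
# costs == {"worker1": 18.0, "worker2": 10.0, "worker3": 11.0}

if router_temperature == 0.0:
    choice = min(costs, key=costs.get)   # deterministic: lowest cost wins
else:
    # Softmax over negative costs: lower cost -> higher selection probability.
    logits = [-c / router_temperature for c in costs.values()]
    m = max(logits)
    weights = [math.exp(l - m) for l in logits]
    choice = random.choices(list(costs), weights=weights)[0]

print(choice)   # "worker2" in the deterministic case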
Events#
Dynamo supports KV Cache Routing across multiple backend implementations through a flexible event system. The KVPublisher component integrates with any framework to emit KV events, while the KVIndexer component maintains a global prefix tree of cached blocks by processing these events from all workers.
+----------------+ +-----------------+
| | | KV Aware Router |
| Worker | | |
| | create_kv_block() | +-------------+ |
| +------------+ | remove_kv_block() | | KVIndexer | |
| |KVPublisher | |------------------------>| +-------------+ |
| +------------+ | | |
| | | |
+----------------+ +-----------------+
KVPublisher#
The KVPublisher can be initialized and then called in the inference framework where blocks are allocated and removed.
The two types of events are:
KV stored event
KV removed event
The publisher can be initialized and used through C bindings or Python bindings.
KVIndexer#
The KVIndexer builds and maintains a global view of cached blocks in a prefix tree. We modify the original prefix tree by also storing the worker id on each node. This is so we can return the number of matched blocks for each worker.
The KVIndexer has a method find_matches_for_request
, which takes in tokens and returns a dictionary with keys of worker id and values of the number of matched KV Blocks.
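To make the worker-aware prefix tree concrete, here is a toy sketch of the idea. It is illustrative only: the real KVIndexer is populated from KV events, and find_matches_for_request takes raw tokens rather than precomputed block hashes.

from collections import defaultdict

class RadixNode:
    def __init__(self):
        self.children: dict[str, "RadixNode"] = {}  # block hash -> child node
        self.workers: set[str] = set()              # workers holding this block

class ToyIndexer:
    def __init__(self):
        self.root = RadixNode()

    def insert(self, block_hashes: list[str], worker_id: str) -> None:
        # Record that `worker_id` has cached this chain of blocks.
        node = self.root
        for h in block_hashes:
            node = node.children.setdefault(h, RadixNode())
            node.workers.add(worker_id)

    def find_matches_for_request(self, block_hashes: list[str]) -> dict[str, int]:
        # Walk the request's block chain and count, per worker, how many
        # consecutive leading blocks that worker already has cached.
        matches: dict[str, int] = defaultdict(int)
        node = self.root
        for h in block_hashes:
            if h not in node.children:
                break
            node = node.children[h]
            for w in node.workers:
                matches[w] += 1
        return dict(matches)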
Inter-Router Communication#
In distributed deployments with multiple routers, each router maintains visibility over only a portion of the total requests. To ensure consistent routing decisions, routers synchronize their states through three event types:
AddRequest: Notifies other routers when a request is assigned to a worker. Includes request ID, worker ID, token sequence blocks, and overlap score to track block usage across the system.
MarkPrefillCompleted: Signals when a request moves from prefill to decode phase, allowing routers to update their worker load calculations by excluding completed prefill tokens.
Free: Indicates request completion and resource release, enabling accurate block reference counting across all routers.
Each event carries a unique router ID to prevent self-event processing. This asynchronous communication system ensures optimal routing decisions by maintaining consistent KV cache state across all routers, even as they handle different request streams.
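As an illustration only, the dataclasses below sketch in Python the kind of information each event carries; the actual message schema is internal to Dynamo and may differ:

from dataclasses import dataclass, field

@dataclass
class AddRequest:
    router_id: str                     # sender id, so replicas can skip their own events
    request_id: str
    worker_id: str                     # worker the request was assigned to
    block_hashes: list[str] = field(default_factory=list)  # token sequence blocks
    overlap_score: float = 0.0         # cached-block overlap used for the decision

@dataclass
class MarkPrefillCompleted:
    router_id: str
    request_id: str                    # prefill done; stop counting its prefill tokens

@dataclass
class Free:
    router_id: str
    request_id: str                    # request finished; release its block references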
Event Persistence and Recovery#
KV cache events are persisted in NATS JetStream, allowing router replicas to maintain their global view of KV blocks across restarts. By default, routers persist their state - they download any available snapshot from NATS object store and continue consuming events from their last acknowledged position in the stream. This default behavior ensures KV cache awareness is maintained across router restarts without any additional configuration.
To manage stream growth, when the message count exceeds --router-snapshot-threshold
, a router acquires an etcd-based distributed lock, purges acknowledged messages from the stream, and uploads the current radix tree state to NATS object store. This snapshot serves as a checkpoint for faster initialization of future router instances.
Using KvPushRouter Python API#
Instead of launching the KV Router via command line, you can create a KvPushRouter
object directly in Python. This allows per-request routing configuration overrides.
Setup#
First, launch your backend engines:
python -m dynamo.vllm --model meta-llama/Llama-2-7b-hf --endpoint dyn://inference.vllm.generate
Example Script#
import asyncio
from dynamo._core import DistributedRuntime, KvPushRouter, KvRouterConfig

async def main():
    # Get runtime and create endpoint
    runtime = DistributedRuntime.detached()
    namespace = runtime.namespace("inference")
    component = namespace.component("vllm")
    endpoint = component.endpoint("generate")

    # Create KV router
    kv_router_config = KvRouterConfig()
    router = KvPushRouter(
        endpoint=endpoint,
        block_size=16,
        kv_router_config=kv_router_config
    )

    # Your input tokens
    token_ids = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    # Generate with per-request routing override
    stream = await router.generate(
        token_ids=token_ids,
        model="meta-llama/Llama-2-7b-hf",
        stop_conditions={
            "max_tokens": 20,     # Generate exactly 20 tokens
            "ignore_eos": True,   # Don't stop at EOS token
        },
        sampling_options={
            "temperature": 0.7,
            "top_p": 0.9,
        },
        router_config_override={
            "overlap_score_weight": 2.0,  # Prioritize cache hits for this request
            "router_temperature": 0.5,    # Add routing randomness
        }
    )

    # Collect generated tokens
    generated_tokens = []
    async for response in stream:
        if isinstance(response, dict) and "token_ids" in response:
            generated_tokens.extend(response["token_ids"])

    print(f"Generated {len(generated_tokens)} tokens: {generated_tokens}")

if __name__ == "__main__":
    asyncio.run(main())
The router_config_override
parameter allows you to adjust routing behavior per request without recreating the router. This is useful for implementing different routing strategies based on request characteristics.