KV Cache Offloading
Dynamo supports multiple KV cache offloading backends for vLLM, allowing you to extend effective KV cache capacity beyond GPU memory using CPU RAM and disk storage. Each backend integrates through vLLM’s connector interface and works with both aggregated and disaggregated serving.
KVBM
KVBM (KV Block Manager) is Dynamo’s built-in KV cache offloading system. It provides a three-layer architecture (LLM runtime, logical block management, NIXL transport) with support for CPU and disk cache tiers, and integrates natively with Dynamo’s KV-aware routing and disaggregated serving.
For configuration details, see the KVBM Guide.
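As a rough illustration, KVBM is typically enabled when launching the Dynamo vLLM worker; the exact flag and sizing variables below are assumptions based on common Dynamo conventions, so treat this as a sketch and defer to the KVBM Guide for the authoritative options.

```shell
# Hypothetical sketch: select KVBM as the KV cache connector for a
# Dynamo vLLM worker. Flag names and env vars may differ by release;
# see the KVBM Guide.
export DYN_KVBM_CPU_CACHE_GB=20   # assumed CPU cache tier size
python -m dynamo.vllm \
  --model Qwen/Qwen3-0.6B \
  --connector kvbm
```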
LMCache
LMCache is an open-source KV cache engine that provides prefill-once, reuse-everywhere caching with multi-level storage backends (CPU RAM, local storage, Redis, GDS, InfiniStore/Mooncake).
For configuration details, see the LMCache Integration Guide.
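For orientation, LMCache commonly plugs into vLLM through the `--kv-transfer-config` connector interface and is tuned via `LMCACHE_*` environment variables; the values below are illustrative, and the model name is a placeholder, so check the LMCache Integration Guide for the settings Dynamo expects.

```shell
# Illustrative sketch: run vLLM with LMCache as the KV connector.
export LMCACHE_CHUNK_SIZE=256            # tokens per cache chunk
export LMCACHE_LOCAL_CPU=True            # enable the CPU RAM tier
export LMCACHE_MAX_LOCAL_CPU_SIZE=20     # CPU cache budget in GB
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --kv-transfer-config '{"kv_connector":"LMCacheConnectorV1","kv_role":"kv_both"}'
```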
FlexKV
FlexKV is a scalable, distributed KV cache runtime developed by Tencent Cloud’s TACO team. It supports multi-level caching (GPU, CPU, SSD), distributed KV cache reuse across nodes, and high-performance I/O via io_uring and GPUDirect Storage.
For configuration details, see the FlexKV Integration Guide.
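FlexKV follows the same connector pattern; the connector name and option keys below are hypothetical placeholders to show the shape of the configuration, not verified values, so rely on the FlexKV Integration Guide for the real ones.

```shell
# Hypothetical sketch: attach FlexKV through vLLM's KV connector
# interface. "FlexKVConnector" and the extra config keys are assumed
# names; consult the FlexKV Integration Guide for actual values.
vllm serve <model> \
  --kv-transfer-config '{"kv_connector":"FlexKVConnector","kv_role":"kv_both"}'
```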
See Also
- KVBM Design: Architecture and design of Dynamo’s built-in KV cache offloading
- KV-Aware Routing: Routing requests based on KV cache state
- Disaggregated Serving: Prefill/decode separation architecture