In modern LLM applications like RAG and Agents, the model is constantly fed new context. For example, in RAG, we retrieve relevant documents and stuff them into the prompt.

The issue is that this dynamically retrieved context doesn’t always appear at the beginning of the input sequence. Traditional KV caching only reuses a “common prefix,” so unless the reusable text sits at the very start of the prompt, the cache hit rate plummets and your GPU ends up recomputing the same chunks over and over.
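To see why, here is a toy illustration of prefix-only reuse (a simplified sketch, not any particular engine’s real API): only the longest shared prefix between the new prompt and a cached one skips recomputation, so a retrieved document that shows up later in the sequence gets no reuse at all.

```python
# Toy illustration of prefix-only KV reuse: only the longest common
# prefix between the new prompt and a cached one is skipped, so a chunk
# that appears later in the sequence is recomputed from scratch.

def common_prefix_len(a: list[int], b: list[int]) -> int:
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

cached_tokens = [101, 7, 8, 9, 42, 43, 44]          # system prompt + doc A
new_tokens    = [101, 7, 8, 9, 88, 89, 42, 43, 44]  # doc B inserted before doc A

reused = common_prefix_len(cached_tokens, new_tokens)
print(f"reused {reused} of {len(new_tokens)} tokens")  # doc A's KV gets recomputed
```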

CacheBlend changes the game by enabling reuse of pre-computed KV caches regardless of where their text appears in the input sequence.
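Conceptually, this means the cache can be keyed by chunk content rather than by the full prompt prefix. The sketch below illustrates that idea; the `kv_store` dictionary and `fetch_or_compute_kv` helper are made up for this example and are not LMCache’s actual storage API.

```python
from hashlib import sha256

# Hypothetical chunk-level KV store (illustrative only): caches are keyed
# by chunk content, not by the full prompt prefix, so a chunk hits the
# cache no matter where it ends up in the assembled prompt.
kv_store: dict[str, object] = {}

def chunk_key(chunk_text: str) -> str:
    return sha256(chunk_text.encode()).hexdigest()

def fetch_or_compute_kv(chunk_text: str, compute_kv):
    key = chunk_key(chunk_text)
    if key not in kv_store:
        kv_store[key] = compute_kv(chunk_text)  # prefill this chunk once
    return kv_store[key]

# Assemble a RAG prompt from chunks in any order; each chunk's KV is
# fetched independently of its position in the final sequence.
prompt_chunks = ["<system prompt>", "<doc B>", "<doc A>", "<user question>"]
kvs = [fetch_or_compute_kv(c, compute_kv=lambda t: f"KV({t})") for c in prompt_chunks]
```

The catch, of course, is that KV entries computed for one position cannot simply be dropped into another position unchanged, which is exactly what the two mechanisms described below address.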

This makes it possible to achieve a 100% KV Cache hit rate in applications like RAG. The performance gains are significant:

  • Faster Time-To-First-Token (TTFT): Get the first response tokens back much more quickly.
  • Higher Throughput: Serve significantly more users with the same hardware.
  • Nearly Lossless Output Quality: All of this comes with negligible degradation in the model’s generation quality.

CacheBlend works by handling the two main challenges of reusing non-prefix KV caches, sketched in the code example after this list:

  • Positional Encoding Update: It efficiently updates positional encodings so the model always sees the correct position of each token, even when cached and newly computed data are stitched together.
  • Selective Attention Recalculation: Instead of recomputing everything, it strategically recalculates only the minimal cross-attention between the new and cached chunks that is needed to preserve generation quality.
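Here is a minimal sketch of both steps, assuming a RoPE-based model. The helper names (`rope_rotate`, `reposition_cached_keys`, `select_tokens_to_recompute`), the tensor shapes, and the 15% recompute ratio are illustrative assumptions, not CacheBlend’s actual implementation.

```python
import torch

def rope_rotate(x: torch.Tensor, positions: torch.Tensor) -> torch.Tensor:
    """Apply rotary position embedding (RoPE) to x of shape (seq, dim)."""
    half = x.shape[-1] // 2
    freqs = 1.0 / (10000.0 ** (torch.arange(half, dtype=x.dtype) / half))
    angles = positions[:, None].to(x.dtype) * freqs[None, :]        # (seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

def reposition_cached_keys(cached_k, old_positions, new_positions):
    # RoPE rotations compose additively, so rotating by the position delta
    # moves a cached key from the position it was prefilled at to its new
    # position in the stitched-together prompt.
    return rope_rotate(cached_k, new_positions - old_positions)

def select_tokens_to_recompute(cached_kv, reference_kv, ratio=0.15):
    # Simplified stand-in for CacheBlend's selection rule: rank tokens by
    # how far their cached KV deviates from a freshly computed reference
    # (e.g. at an early layer) and recompute only the worst `ratio`
    # fraction at the remaining layers. The 15% is an illustrative knob.
    deviation = (cached_kv - reference_kv).norm(dim=-1)              # (seq,)
    k = max(1, int(ratio * deviation.shape[0]))
    return deviation.topk(k).indices

# Toy usage: a 6-token chunk prefilled at positions 0..5 is reused at
# positions 100..105 of a new prompt.
seq, dim = 6, 64
cached_k = torch.randn(seq, dim)          # pretend: RoPE-encoded keys from the chunk's prefill
old_pos = torch.arange(0, seq)
new_pos = torch.arange(100, 100 + seq)
moved_k = reposition_cached_keys(cached_k, old_pos, new_pos)

reference_kv = cached_k + 0.01 * torch.randn(seq, dim)  # pretend: fresh prefill of the same chunk
print(select_tokens_to_recompute(cached_k, reference_kv))
```

In this simplified picture, cached keys can be shifted to their new positions with a cheap elementwise rotation, and only a small, carefully chosen fraction of tokens ever needs a full recompute, which is why the quality loss stays negligible.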

An interactive CacheBlend demo is available at: https://github.com/LMCache/LMCache-Examples/tree/main/demo-rag-blending