National Preprint Platform

ContextCache: Context-Aware Semantic Cache for Multi-Turn Queries in Large Language Models

Source: arXiv
Abstract

Semantic caching significantly reduces computational costs and improves efficiency by storing and reusing large language model (LLM) responses. However, existing systems rely primarily on matching individual queries, lacking awareness of multi-turn dialogue contexts, which leads to incorrect cache hits when similar queries appear in different conversational settings. This demonstration introduces ContextCache, a context-aware semantic caching system for multi-turn dialogues. ContextCache employs a two-stage retrieval architecture that first executes vector-based retrieval on the current query to identify potential matches and then integrates current and historical dialogue representations through self-attention mechanisms for precise contextual matching. Evaluation on real-world conversations shows that ContextCache improves precision and recall compared with existing methods. Additionally, cached responses exhibit approximately 10 times lower latency than direct LLM invocation, enabling significant computational cost reductions for LLM conversational applications.
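The paper's own implementation is not reproduced here, but the two-stage design described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the names (ContextCache, embed, attend_context), the top-k and threshold values, the hash-based stand-in embedder, and the parameter-free self-attention pooling; the actual system's retrieval index, attention parameters, and storage layout may differ.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Hypothetical stand-in embedder: a deterministic pseudo-random unit
    vector per string. A real system would use a sentence-embedding model;
    with this stand-in, only exact-duplicate strings are 'similar'."""
    rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def attend_context(turn_vecs: np.ndarray) -> np.ndarray:
    """Fuse per-turn embeddings into one context vector with a single
    parameter-free self-attention pass (queries = keys = values)."""
    d = turn_vecs.shape[1]
    scores = turn_vecs @ turn_vecs.T / np.sqrt(d)          # (turns, turns)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)          # row-wise softmax
    pooled = (weights @ turn_vecs).mean(axis=0)            # collapse turns
    return pooled / np.linalg.norm(pooled)

class ContextCache:
    """Illustrative two-stage context-aware semantic cache."""

    def __init__(self, k: int = 5, q_thresh: float = 0.75, ctx_thresh: float = 0.9):
        # Each entry: (query embedding, context embedding, cached response).
        self.entries: list[tuple[np.ndarray, np.ndarray, str]] = []
        self.k, self.q_thresh, self.ctx_thresh = k, q_thresh, ctx_thresh

    def _context_vec(self, history: list[str], query: str) -> np.ndarray:
        turns = np.stack([embed(t) for t in history + [query]])
        return attend_context(turns)

    def put(self, history: list[str], query: str, response: str) -> None:
        self.entries.append(
            (embed(query), self._context_vec(history, query), response)
        )

    def get(self, history: list[str], query: str) -> str | None:
        if not self.entries:
            return None
        q = embed(query)
        # Stage 1: vector retrieval on the current query alone.
        sims = [float(q @ qv) for qv, _, _ in self.entries]
        top = sorted(range(len(sims)), key=sims.__getitem__, reverse=True)[: self.k]
        candidates = [i for i in top if sims[i] >= self.q_thresh]
        if not candidates:
            return None
        # Stage 2: contextual matching over the shortlisted candidates.
        ctx = self._context_vec(history, query)
        best = max(candidates, key=lambda i: float(ctx @ self.entries[i][1]))
        if float(ctx @ self.entries[best][1]) >= self.ctx_thresh:
            return self.entries[best][2]   # contextual cache hit
        return None                        # similar query, different context
```

In this toy setup, caching a response for "How fast is it?" after "Tell me about Python." and then looking up the same "How fast is it?" after a conversation about Java should pass stage 1 (identical query) but be rejected at stage 2 (different context vector), which is the false-hit case the abstract describes.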

Jianxin Yan, Wangze Ni, Lei Chen, Xuemin Lin, Peng Cheng, Zhan Qin, Kui Ren

Subjects: Computing Technology, Computer Technology

Jianxin Yan, Wangze Ni, Lei Chen, Xuemin Lin, Peng Cheng, Zhan Qin, Kui Ren. ContextCache: Context-Aware Semantic Cache for Multi-Turn Queries in Large Language Models [EB/OL]. (2025-07-15) [2025-07-20]. https://arxiv.org/abs/2506.22791.
