
Adaptive Semantic Prompt Caching with VectorQ

February 6, 2025
Authors: Luis Gaspar Schroeder, Shu Liu, Alejandro Cuadron, Mark Zhao, Stephan Krusche, Alfons Kemper, Matei Zaharia, Joseph E. Gonzalez
cs.AI

Abstract

Semantic prompt caches reduce the latency and cost of large language model (LLM) inference by reusing cached LLM-generated responses for semantically similar prompts. Vector similarity metrics assign a numerical score to quantify the similarity between an embedded prompt and its nearest neighbor in the cache. Existing systems rely on a static threshold to classify whether the similarity score is sufficiently high to result in a cache hit. We show that this one-size-fits-all threshold is insufficient across different prompts. We propose VectorQ, a framework to learn embedding-specific threshold regions that adapt to the complexity and uncertainty of an embedding. Through evaluations on a combination of four diverse datasets, we show that VectorQ consistently outperforms state-of-the-art systems across all static thresholds, achieving up to 12x increases in cache hit rate and error rate reductions up to 92%.
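To make the cache-lookup mechanism described above concrete, here is a minimal sketch of a semantic prompt cache that uses a single static similarity threshold, i.e., the baseline behavior the paper argues is insufficient. The helper names (`embed_fn`, `llm_fn`), the cosine-similarity metric, and the default threshold value are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a semantic prompt cache with a static similarity threshold.
# Assumptions: embed_fn maps a prompt to a 1-D embedding vector, llm_fn maps a
# prompt to a generated response, and cosine similarity is the vector metric.
import numpy as np

class StaticThresholdPromptCache:
    def __init__(self, embed_fn, llm_fn, threshold=0.9):
        self.embed_fn = embed_fn      # prompt -> embedding vector
        self.llm_fn = llm_fn          # prompt -> LLM-generated response
        self.threshold = threshold    # one global (static) cutoff for all prompts
        self.embeddings = []          # cached prompt embeddings
        self.responses = []           # cached responses, aligned with embeddings

    def _cosine(self, a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def query(self, prompt):
        emb = np.asarray(self.embed_fn(prompt), dtype=float)
        if self.embeddings:
            # Score the new prompt against its nearest cached neighbor.
            scores = [self._cosine(emb, e) for e in self.embeddings]
            best = int(np.argmax(scores))
            if scores[best] >= self.threshold:
                return self.responses[best]   # cache hit: reuse cached response
        # Cache miss: generate a fresh response and store it for future reuse.
        response = self.llm_fn(prompt)
        self.embeddings.append(emb)
        self.responses.append(response)
        return response
```

The key difference in VectorQ, per the abstract, is that the single `threshold` above is replaced by learned, embedding-specific threshold regions that adapt to the complexity and uncertainty of each cached embedding; the sketch shows only the static-threshold baseline it improves upon.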

