LightThinker: Thinking Step-by-Step Compression
February 21, 2025
Authors: Jintian Zhang, Yuqi Zhu, Mengshu Sun, Yujie Luo, Shuofei Qiao, Lun Du, Da Zheng, Huajun Chen, Ningyu Zhang
cs.AI
Abstract
Large language models (LLMs) have shown remarkable performance in complex
reasoning tasks, but their efficiency is hindered by the substantial memory and
computational costs associated with generating lengthy tokens. In this paper,
we propose LightThinker, a novel method that enables LLMs to dynamically
compress intermediate thoughts during reasoning. Inspired by human cognitive
processes, LightThinker compresses verbose thought steps into compact
representations and discards the original reasoning chains, thereby
significantly reducing the number of tokens stored in the context window. This
is achieved by training the model on when and how to perform compression
through data construction, mapping hidden states to condensed gist tokens, and
creating specialized attention masks. Additionally, we introduce the Dependency
(Dep) metric to quantify the degree of compression by measuring the reliance on
historical tokens during generation. Extensive experiments on four datasets and
two models show that LightThinker reduces peak memory usage and inference time,
while maintaining competitive accuracy. Our work provides a new direction for
improving the efficiency of LLMs in complex reasoning tasks without sacrificing
performance. Code will be released at https://github.com/zjunlp/LightThinker.
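
Illustrative sketch (not the released LightThinker code): one way to realize the specialized attention mask described in the abstract, where tokens generated after a compression step can attend to the prompt and the gist tokens but no longer to the raw thought tokens they summarize. The segment labels (PROMPT, THOUGHT, GIST, AFTER) and the function name build_compression_mask are hypothetical names introduced here for illustration.

```python
# Illustrative sketch, not the authors' implementation.
import torch

# Hypothetical segment labels for one reasoning episode.
PROMPT, THOUGHT, GIST, AFTER = 0, 1, 2, 3

def build_compression_mask(labels: torch.Tensor) -> torch.Tensor:
    """labels: (seq_len,) integer tensor of segment labels.
    Returns a (seq_len, seq_len) boolean mask; True means attention is allowed."""
    n = labels.size(0)
    causal = torch.tril(torch.ones(n, n, dtype=torch.bool))  # standard causal mask
    allowed = torch.ones(n, n, dtype=torch.bool)
    is_thought = labels == THOUGHT
    is_after = labels == AFTER
    # Tokens generated after compression cannot see the raw thought tokens;
    # the gist tokens that summarized the thought remain visible.
    allowed[is_after.unsqueeze(1) & is_thought.unsqueeze(0)] = False
    return causal & allowed

# Example: 2 prompt tokens, a 3-token thought, 1 gist token, 2 later tokens.
labels = torch.tensor([PROMPT, PROMPT, THOUGHT, THOUGHT, THOUGHT, GIST, AFTER, AFTER])
print(build_compression_mask(labels).int())
```

Under such a mask, the key-value cache entries for the raw thought tokens are no longer needed once the gist tokens exist, which is consistent with the abstract's claim of reducing the number of tokens stored in the context window.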
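
The Dependency (Dep) metric is described only at a high level in the abstract. A hedged reading, sketched below, counts how many earlier tokens each generated token may attend to under a given attention mask; the function dependency and the toy masks are assumptions for illustration, not the paper's exact definition.

```python
# Illustrative sketch of an assumed reading of the Dep metric, not the paper's formula.
import torch

def dependency(mask: torch.Tensor, gen_start: int) -> int:
    """mask: (seq_len, seq_len) boolean attention mask (True = visible).
    gen_start: index of the first generated token.
    Returns the total number of strictly earlier tokens attended to during generation."""
    total = 0
    for i in range(gen_start, mask.size(0)):
        total += int(mask[i, :i].sum().item())
    return total

# Toy comparison on an 8-token sequence: full causal attention versus a mask in
# which 3 raw thought tokens (positions 2-4) are hidden from the last 2 generated tokens.
n = 8
causal = torch.tril(torch.ones(n, n, dtype=torch.bool))
compressed = causal.clone()
compressed[6:, 2:5] = False
print(dependency(causal, gen_start=6))      # 13: every earlier token is attended
print(dependency(compressed, gen_start=6))  # 7: the discarded thought tokens no longer count
```

On this reading, a lower Dep value means generation relies on fewer historical tokens, i.e., the reasoning chain has been compressed more aggressively.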