TL;DR: Too Long, Do Re-weighting for Efficient LLM Reasoning Compression
June 3, 2025
Authors: Zhong-Zhi Li, Xiao Liang, Zihao Tang, Lei Ji, Peijie Wang, Haotian Xu, Xing W, Haizhen Huang, Weiwei Deng, Ying Nian Wu, Yeyun Gong, Zhijiang Guo, Xiao Liu, Fei Yin, Cheng-Lin Liu
cs.AI
Abstract
Large Language Models (LLMs) have recently achieved remarkable progress by leveraging Reinforcement Learning and extended Chain-of-Thought (CoT) techniques. However, the challenge of performing efficient language reasoning, especially during inference with extremely long outputs, has drawn increasing attention from the research community. In this work, we propose a dynamic ratio-based training pipeline that does not rely on sophisticated data annotations or on interpolation between multiple models. Instead, we continuously balance the weights between the model's System-1 and System-2 data to eliminate redundant reasoning processes while preserving the model's reasoning capability. We validate our approach on DeepSeek-R1-Distill-7B and DeepSeek-R1-Distill-14B, across a diverse set of benchmarks with varying difficulty levels. Our method reduces the number of output tokens by nearly 40% while maintaining reasoning accuracy. Our code and data will be made publicly available soon.
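The abstract gives no implementation details, so the following is only a minimal sketch of how a dynamic System-1/System-2 re-weighting might be wired into a standard PyTorch training loop. Every name here (`mixed_loss`, `update_ratio`, the linear schedule, the HuggingFace-style `model(**batch).loss` access) is an illustrative assumption, not the authors' released code.

```python
# A minimal, hypothetical sketch of the dynamic ratio idea described above:
# mix System-1 (concise-answer) and System-2 (long-CoT) training examples,
# and re-weight the mixture over training. The helper names, the
# HuggingFace-style `model(**batch).loss` call, and the linear schedule are
# illustrative assumptions, not the paper's published implementation.
import torch


def mixed_loss(model, sys1_batch, sys2_batch, w1: float) -> torch.Tensor:
    """Weighted sum of language-modeling losses on the two data styles."""
    loss1 = model(**sys1_batch).loss  # concise, answer-focused targets
    loss2 = model(**sys2_batch).loss  # verbose chain-of-thought targets
    return w1 * loss1 + (1.0 - w1) * loss2


def update_ratio(step: int, total_steps: int,
                 w_start: float = 0.2, w_end: float = 0.8) -> float:
    """One plausible schedule: linearly shift weight toward System-1 data,
    shortening outputs while System-2 data anchors reasoning quality."""
    t = step / max(total_steps, 1)
    return w_start + (w_end - w_start) * t


def train(model, optimizer, sys1_loader, sys2_loader, total_steps: int):
    """Training loop that re-balances the System-1/System-2 mix each step."""
    for step, (b1, b2) in enumerate(zip(sys1_loader, sys2_loader)):
        if step >= total_steps:
            break
        w1 = update_ratio(step, total_steps)
        loss = mixed_loss(model, b1, b2, w1)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In the paper's framing the ratio is balanced dynamically rather than on a fixed schedule, so a closer variant would update `w1` from a training-time signal (for example, validation accuracy or average output length) instead of the step count.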