TL;DR: Too Long, Do Re-weighting for Efficient LLM Reasoning Compression
June 3, 2025
Authors: Zhong-Zhi Li, Xiao Liang, Zihao Tang, Lei Ji, Peijie Wang, Haotian Xu, Xing W, Haizhen Huang, Weiwei Deng, Ying Nian Wu, Yeyun Gong, Zhijiang Guo, Xiao Liu, Fei Yin, Cheng-Lin Liu
cs.AI
Abstract
Large Language Models (LLMs) have recently achieved remarkable progress by
leveraging Reinforcement Learning and extended Chain-of-Thought (CoT)
techniques. However, the challenge of performing efficient language
reasoning--especially during inference with extremely long outputs--has drawn
increasing attention from the research community. In this work, we propose a
dynamic ratio-based training pipeline that does not rely on sophisticated data
annotations or interpolation between multiple models. We continuously balance
the weights between the model's System-1 and System-2 data to eliminate
redundant reasoning processes while preserving the model's reasoning
capability. We validate our approach on DeepSeek-R1-Distill-7B and
DeepSeek-R1-Distill-14B, and on a diverse set of benchmarks with varying
difficulty levels. Our method significantly reduces the number of output tokens
by nearly 40% while maintaining reasoning accuracy. Our code and data will be
available soon.
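
As a rough illustration of the idea described above (not the authors' actual pipeline), the sketch below shows one way the training weight between System-1 (concise-answer) and System-2 (long chain-of-thought) data could be adjusted dynamically over training. The linear schedule, function names, and hyperparameters are assumptions made for illustration only.

```python
# Hypothetical sketch of dynamic System-1 / System-2 re-weighting.
# The schedule, loss combination, and default values are illustrative
# assumptions, not the paper's actual implementation.

def dynamic_ratio(step: int, total_steps: int,
                  start: float = 0.2, end: float = 0.8) -> float:
    """Fraction of weight given to System-1 (concise) data, increased
    linearly over training to progressively discourage redundant reasoning."""
    t = min(max(step / max(total_steps, 1), 0.0), 1.0)
    return start + (end - start) * t

def mixed_loss(loss_system1: float, loss_system2: float,
               step: int, total_steps: int) -> float:
    """Convex combination of the two per-batch losses under the current ratio."""
    w1 = dynamic_ratio(step, total_steps)
    return w1 * loss_system1 + (1.0 - w1) * loss_system2

if __name__ == "__main__":
    # Toy usage: show how the System-1 mixing weight evolves over training.
    for step in (0, 2500, 5000, 7500, 10000):
        print(step, round(dynamic_ratio(step, total_steps=10000), 2))
```

Under this kind of schedule, early training is dominated by System-2 (full reasoning traces) to preserve reasoning capability, while later steps up-weight System-1 data to compress the output; the exact balancing rule used by the authors may differ.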