Light-IF: Endowing LLMs with Generalizable Reasoning via Preview and Self-Checking for Complex Instruction Following
August 5, 2025
Authors: Chenyang Wang, Liang Wen, Shousheng Jia, Xiangzheng Zhang, Liang Xu
cs.AI
Abstract
While advancements in the reasoning abilities of LLMs have significantly
enhanced their performance in solving mathematical problems, coding tasks, and
general puzzles, their effectiveness in accurately adhering to instructions
remains inconsistent, particularly with more complex directives. Our
investigation identifies lazy reasoning during the thinking stage as the
primary factor contributing to poor instruction adherence. To mitigate this
issue, we propose a comprehensive framework designed to enable rigorous
reasoning processes involving preview and self-checking, essential for
satisfying strict instruction constraints. Specifically, we first generate
instructions with complex constraints and apply a filtering process to obtain
valid prompts, resulting in three distinct prompt datasets categorized as hard,
easy, and pass. Then, we employ rejection sampling on the pass prompts to
curate a small yet high-quality dataset, enabling a cold-start initialization
of the model and facilitating its adaptation to effective reasoning patterns.
Subsequently, we employ an entropy-preserving supervised fine-tuning
(Entropy-SFT) strategy coupled with token-wise entropy-adaptive
reinforcement learning (TEA-RL) guided by rule-based dense rewards. This approach
encourages the model to transform its reasoning mechanism, ultimately fostering
generalizable reasoning abilities that encompass preview and self-checking.
Extensive experiments conducted on instruction-following benchmarks demonstrate
remarkable performance improvements across various model scales. Notably, our
Light-IF-32B model surpasses both larger open-source models such as DeepSeek-R1
and closed-source models like Doubao-1.6.
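
The data-curation stages described above — bucketing prompts by difficulty and rejection-sampling responses from the "pass" prompts — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the names (`categorize_prompt`, `rejection_sample`, the pass-rate thresholds, and the `generate`/`verify` callables) are assumptions for illustration.

```python
# Hypothetical sketch of the prompt-filtering and rejection-sampling stages.
# Thresholds and function names are illustrative assumptions, not the paper's code.

def categorize_prompt(pass_rate: float, hard_max: float = 0.1,
                      easy_min: float = 0.9) -> str:
    """Bucket a prompt by the model's empirical pass rate on it."""
    if pass_rate <= hard_max:
        return "hard"   # constraints almost never satisfied
    if pass_rate >= easy_min:
        return "easy"   # trivially satisfied; little training signal
    return "pass"       # solvable but non-trivial; used for curation


def rejection_sample(prompts, generate, verify, k: int = 8):
    """Keep only (prompt, response) pairs whose response passes rule-based checks.

    generate(prompt) -> response  : samples one model response
    verify(prompt, response)      : rule-based constraint checker (True = pass)
    k                             : max sampling attempts per prompt
    """
    curated = []
    for p in prompts:
        for _ in range(k):
            r = generate(p)
            if verify(p, r):
                curated.append((p, r))
                break  # one verified sample per prompt is enough
    return curated
```

Such a curated set would then serve as the small, high-quality cold-start dataset the abstract describes, before Entropy-SFT and TEA-RL are applied.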