
Beyond 'Aha!': Toward Systematic Meta-Abilities Alignment in Large Reasoning Models

May 15, 2025
Authors: Zhiyuan Hu, Yibo Wang, Hanze Dong, Yuhui Xu, Amrita Saha, Caiming Xiong, Bryan Hooi, Junnan Li
cs.AI

Abstract

Large reasoning models (LRMs) already possess a latent capacity for long chain-of-thought reasoning. Prior work has shown that outcome-based reinforcement learning (RL) can incidentally elicit advanced reasoning behaviors such as self-correction, backtracking, and verification, phenomena often referred to as the model's "aha moment". However, the timing and consistency of these emergent behaviors remain unpredictable and uncontrollable, limiting the scalability and reliability of LRMs' reasoning capabilities. To address these limitations, we move beyond reliance on prompts and coincidental "aha moments". Instead, we explicitly align models with three meta-abilities (deduction, induction, and abduction) using automatically generated, self-verifiable tasks. Our three-stage pipeline (individual alignment, parameter-space merging, and domain-specific reinforcement learning) boosts performance by over 10% relative to instruction-tuned baselines. Furthermore, domain-specific RL from the aligned checkpoint yields an additional 2% average gain in the performance ceiling across math, coding, and science benchmarks, demonstrating that explicit meta-ability alignment offers a scalable and dependable foundation for reasoning. Code is available at: https://github.com/zhiyuanhubj/Meta-Ability-Alignment.
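As an illustration of what an automatically generated, self-verifiable task can look like, the sketch below builds a toy propositional-deduction instance and grades answers by forward chaining, so no human labels are needed. The task format, generator, and function names are hypothetical assumptions for illustration and are not taken from the released code.

```python
# Minimal sketch of a self-verifiable deduction task: sample random implication
# rules, then decide entailment by forward chaining so model answers can be
# graded automatically. All names and the task format are illustrative.
import random

def generate_task(num_facts=6, num_rules=8, seed=None):
    rng = random.Random(seed)
    facts = [f"P{i}" for i in range(num_facts)]
    rules = [tuple(rng.sample(facts, 2)) for _ in range(num_rules)]  # (antecedent, consequent)
    return {"rules": rules, "given": rng.choice(facts), "query": rng.choice(facts)}

def entailed(task):
    """Forward chaining: close the given fact under the implication rules."""
    known = {task["given"]}
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in task["rules"]:
            if antecedent in known and consequent not in known:
                known.add(consequent)
                changed = True
    return task["query"] in known

def grade(model_answer: str, task) -> bool:
    """Self-verification: compare the model's yes/no answer to the checker."""
    return (model_answer.strip().lower() == "yes") == entailed(task)

if __name__ == "__main__":
    task = generate_task(seed=0)
    print(task, "->", "yes" if entailed(task) else "no")
```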

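The parameter-space merging stage can likewise be pictured as a weighted average of the individually aligned checkpoints. The snippet below is a minimal sketch assuming three checkpoints with identical architectures saved as PyTorch state dicts; the file names and uniform weights are placeholders, not the paper's exact recipe.

```python
# Minimal sketch of parameter-space merging: a weighted average of the state
# dicts of three individually aligned checkpoints (deduction, induction,
# abduction). Paths and weights below are hypothetical placeholders.
import torch

def merge_checkpoints(paths, weights):
    """Average model parameters across checkpoints with the given weights."""
    assert len(paths) == len(weights) and abs(sum(weights) - 1.0) < 1e-6
    merged = None
    for path, w in zip(paths, weights):
        state = torch.load(path, map_location="cpu")
        if merged is None:
            merged = {k: w * v.float() for k, v in state.items()}
        else:
            for k, v in state.items():
                merged[k] += w * v.float()
    return merged

if __name__ == "__main__":
    merged_state = merge_checkpoints(
        ["deduction.pt", "induction.pt", "abduction.pt"],  # hypothetical checkpoint paths
        [1 / 3, 1 / 3, 1 / 3],                             # uniform weights assumed for illustration
    )
    torch.save(merged_state, "merged_meta_ability.pt")
```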