Control-R: Towards controllable test-time scaling

May 30, 2025
Authors: Di Zhang, Weida Wang, Junxian Li, Xunzhi Wang, Jiatong Li, Jianbo Wu, Jingdi Lei, Haonan He, Peng Ye, Shufei Zhang, Wanli Ouyang, Yuqiang Li, Dongzhan Zhou
cs.AI

Abstract

This paper addresses the challenges of underthinking and overthinking in long chain-of-thought (CoT) reasoning for Large Reasoning Models (LRMs) by introducing Reasoning Control Fields (RCF)--a novel test-time approach that injects structured control signals to guide reasoning from a tree-search perspective. RCF enables models to adjust their reasoning effort according to given control conditions when solving complex tasks. Additionally, we present the Control-R-4K dataset, which consists of challenging problems annotated with detailed reasoning processes and corresponding control fields. To further enhance reasoning control, we propose a Conditional Distillation Finetuning (CDF) method, which trains models--particularly Control-R-32B--to effectively adjust reasoning effort at test time. Experimental results on benchmarks such as AIME2024 and MATH500 demonstrate that our approach achieves state-of-the-art performance at the 32B scale while enabling a controllable long CoT (L-CoT) reasoning process. Overall, this work introduces an effective paradigm for controllable test-time scaling of reasoning.
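To make the core idea concrete: RCF conditions a reasoning model on structured control signals injected at test time. The abstract does not specify the serialization format or the field names, so the sketch below is a minimal illustration under assumed conventions--`build_rcf_prompt`, the tag syntax, and the field names `exploration_breadth`, `reasoning_depth`, and `self_check` are all hypothetical, not the paper's actual interface.

```python
# Hedged sketch of test-time control-signal injection. The paper does not
# publish its control-field format; we assume a simple key-value scheme
# serialized into the prompt, where each field constrains one aspect of the
# tree-search-style reasoning (all names below are hypothetical).

def build_rcf_prompt(question: str, control_fields: dict) -> str:
    """Prepend structured control signals to a problem statement so a
    reasoning model can condition its test-time effort on them."""
    header = "\n".join(
        f'<control name="{name}">{value}</control>'
        for name, value in sorted(control_fields.items())
    )
    return f"{header}\n<question>{question}</question>"

prompt = build_rcf_prompt(
    "Find the number of positive integers n < 100 such that n^2 + 1 "
    "is divisible by 5.",
    {
        "exploration_breadth": 2,    # hypothetical: branches tried per step
        "reasoning_depth": "high",   # hypothetical: how long the CoT may run
        "self_check": "on",          # hypothetical: enable verification passes
    },
)
print(prompt)
```

Under a scheme like this, raising `reasoning_depth` would ask the model for a longer chain of thought on hard problems (countering underthinking), while lowering it would cap effort on easy ones (countering overthinking); CDF is the training stage that teaches the model to actually respect such conditions.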
PDF · June 5, 2025