Control-R: Towards controllable test-time scaling
May 30, 2025
Authors: Di Zhang, Weida Wang, Junxian Li, Xunzhi Wang, Jiatong Li, Jianbo Wu, Jingdi Lei, Haonan He, Peng Ye, Shufei Zhang, Wanli Ouyang, Yuqiang Li, Dongzhan Zhou
cs.AI
Abstract
This paper addresses the challenges of underthinking and overthinking in
long chain-of-thought (CoT) reasoning for Large Reasoning Models (LRMs) by
introducing Reasoning Control Fields (RCF)--a novel test-time approach that
injects structured control signals to guide reasoning from a tree-search
perspective. RCF enables models to adjust their reasoning effort according to
given control conditions when solving complex tasks. Additionally, we present
the Control-R-4K dataset, which consists of challenging problems annotated with
detailed reasoning processes and corresponding control fields. To further
enhance reasoning control, we propose a Conditional Distillation Finetuning
(CDF) method, which trains models--particularly Control-R-32B--to effectively
adjust reasoning effort at test time. Experimental results on benchmarks
such as AIME2024 and MATH500 demonstrate that our approach achieves
state-of-the-art performance at the 32B scale while enabling a controllable
long chain-of-thought (L-CoT) reasoning process. Overall, this work introduces
an effective paradigm for controllable test-time scaling of reasoning.