SRFT: A Single-Stage Method with Supervised and Reinforcement Fine-Tuning for Reasoning

June 24, 2025
作者: Yuqian Fu, Tinghong Chen, Jiajun Chai, Xihuai Wang, Songjun Tu, Guojun Yin, Wei Lin, Qichao Zhang, Yuanheng Zhu, Dongbin Zhao
cs.AI

Abstract

Large language models (LLMs) have achieved remarkable progress in reasoning tasks, yet the optimal integration of Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) remains a fundamental challenge. Through comprehensive analysis of token distributions, learning dynamics, and integration mechanisms from entropy-based perspectives, we reveal key differences between these paradigms: SFT induces coarse-grained global changes to LLM policy distributions, while RL performs fine-grained selective optimizations, with entropy serving as a critical indicator of training effectiveness. Building on these observations, we propose Supervised Reinforcement Fine-Tuning (SRFT), a single-stage method that unifies both fine-tuning paradigms through entropy-aware weighting mechanisms. Our approach simultaneously applies SFT and RL to directly optimize the LLM using demonstrations and self-exploration rollouts rather than through two-stage sequential methods. Extensive experiments show that SRFT achieves 59.1% average accuracy, outperforming zero-RL methods by 9.0% on five mathematical reasoning benchmarks and 10.9% on three out-of-distribution benchmarks.
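The abstract describes a single training objective that mixes an SFT loss on demonstrations with a policy-gradient RL loss on self-exploration rollouts, each scaled by an entropy-aware weight. The sketch below is an illustrative reconstruction in PyTorch, not the paper's actual formulation: the function names (`entropy_from_logits`, `srft_style_loss`) and the sigmoid-based weighting heuristics `w_sft` / `w_rl` are assumptions made for illustration only.

```python
# Illustrative sketch of an entropy-aware single-stage SFT + RL objective.
# NOT the paper's exact method; the weighting functions below are hypothetical.

import torch
import torch.nn.functional as F


def entropy_from_logits(logits):
    # Token-level entropy of the policy distribution, shape: (batch, seq_len)
    log_probs = F.log_softmax(logits, dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1)


def srft_style_loss(demo_logits, demo_targets, demo_mask,
                    rollout_logits, rollout_actions, rollout_mask, advantages):
    """Combine an SFT term on demonstrations with an RL term on rollouts.

    demo_targets / rollout_actions: long tensors of token ids, shape (batch, seq_len).
    demo_mask / rollout_mask: float masks over valid tokens, same shape.
    advantages: per-token (or broadcastable per-sequence) advantage estimates.
    """
    # --- SFT term on demonstration tokens ---
    sft_nll = F.cross_entropy(
        demo_logits.flatten(0, 1), demo_targets.flatten(), reduction="none"
    ).view(demo_targets.shape)
    # Assumed heuristic: scale imitation by the policy's entropy on demo tokens,
    # so the coarse-grained SFT signal fades as the policy becomes confident.
    demo_entropy = entropy_from_logits(demo_logits).detach()
    w_sft = torch.sigmoid(demo_entropy - 1.0)          # hypothetical weighting
    sft_loss = (w_sft * sft_nll * demo_mask).sum() / demo_mask.sum()

    # --- RL term (policy gradient) on self-exploration rollouts ---
    logp = F.log_softmax(rollout_logits, dim=-1)
    act_logp = logp.gather(-1, rollout_actions.unsqueeze(-1)).squeeze(-1)
    rollout_entropy = entropy_from_logits(rollout_logits).detach()
    # Assumed heuristic: emphasize fine-grained, selective RL updates where the
    # policy is already low-entropy.
    w_rl = torch.sigmoid(1.0 - rollout_entropy)        # hypothetical weighting
    pg_loss = -(w_rl * advantages * act_logp * rollout_mask).sum() / rollout_mask.sum()

    return sft_loss + pg_loss
```

Unlike a two-stage SFT-then-RL pipeline, both terms are computed and summed at every update step, which is the single-stage property the abstract emphasizes.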