Data Repetition Beats Data Scaling in Long-CoT Supervised Fine-Tuning
February 11, 2026
Authors: Dawid J. Kopiczko, Sagar Vaze, Tijmen Blankevoort, Yuki M. Asano
cs.AI
Abstract
Supervised fine-tuning (SFT) on chain-of-thought data is an essential post-training step for reasoning language models. Standard machine learning intuition suggests that training on more unique samples yields better generalization. Counterintuitively, we show that SFT benefits from repetition: under a fixed update budget, training for more epochs on smaller datasets outperforms single-epoch training on larger datasets. On the AIME'24/25 and GPQA benchmarks, Olmo3-7B trained for 128 epochs on 400 samples outperforms an equivalent single epoch on 51,200 samples by 12–26 percentage points, with no additional catastrophic forgetting. We find that training token accuracy reliably signals when repetition has saturated: improvements from additional epochs plateau at full memorization, a pattern consistent across all settings. These findings provide a practical approach for reasoning SFT, where scaling epochs with token accuracy as a stopping criterion can replace expensive undirected data scaling. We pose the repetition advantage, where full memorization coincides with improved generalization, as a new open problem for the community in understanding the training dynamics of large language models.
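The stopping criterion described in the abstract is simple to operationalize. The sketch below is a hypothetical illustration, not the authors' released code: it assumes a Hugging Face-style causal LM whose forward pass returns `loss` and `logits`, the conventional `-100` label for unsupervised prompt/padding tokens, and placeholder `model`, `loader`, and `optimizer` objects; the `0.999` saturation threshold is likewise an illustrative choice.

```python
import torch

IGNORE_INDEX = -100  # conventional SFT label for prompt/padding tokens


@torch.no_grad()
def token_counts(logits: torch.Tensor, labels: torch.Tensor) -> tuple[int, int]:
    """Return (# correctly predicted supervised tokens, # supervised tokens)."""
    preds = logits[:, :-1].argmax(dim=-1)  # position t scores token t + 1
    targets = labels[:, 1:]
    mask = targets != IGNORE_INDEX        # score only supervised tokens
    correct = ((preds == targets) & mask).sum().item()
    return correct, int(mask.sum().item())


def train_until_memorized(model, loader, optimizer,
                          max_epochs: int = 128,
                          threshold: float = 0.999) -> None:
    """Repeat epochs over a fixed small dataset; stop once training token
    accuracy saturates, the paper's proxy for full memorization."""
    for epoch in range(max_epochs):
        correct = total = 0
        for batch in loader:
            out = model(input_ids=batch["input_ids"], labels=batch["labels"])
            out.loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            # Online estimate: accuracy is accumulated over the epoch's
            # training batches rather than via a separate evaluation pass.
            c, t = token_counts(out.logits, batch["labels"])
            correct, total = correct + c, total + t
        acc = correct / total
        print(f"epoch {epoch + 1}: training token accuracy = {acc:.4f}")
        if acc >= threshold:  # further epochs yield diminishing returns
            break
```

Under the paper's finding, this loop would be run on a small dataset (e.g. a few hundred samples) in place of collecting more data; the token-accuracy plateau signals when additional epochs stop helping.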