The Valley of Code Reasoning: Scaling Knowledge Distillation of Large Language Models
October 7, 2025
Authors: Muyu He, Muhammad Ali Shafique, Anand Kumar, Tsach Mackey, Nazneen Rajani
cs.AI
Abstract
Distilling the thinking traces of a Large Language Model (LLM) with reasoning capabilities into a smaller model has proven effective. Yet little work has been done on how model performance scales with the quantity of distillation data. In this work, we study the scaling trend of distilling competitive coding skills into two small non-reasoning LLMs. We validate the hypothesis that there is a valley of code reasoning: downstream performance on competitive coding first drops as data quantity increases, then rises steadily in a sharper-than-log-linear fashion. Having identified this trend, we further fine-tune the models at two different distillation stages on the same data to ground our conclusions in their respective learning phases. We find that across stages in the low and medium-low data regimes, small models benefit significantly more from easier coding questions than from harder ones. We also find that, surprisingly, the correctness of outputs in the training data makes no difference to distillation outcomes. Our work is a step toward understanding the training dynamics of code reasoning distillation beyond intuition.
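
A minimal sketch, under assumptions, of how the "sharper-than-log-linear" claim could be checked: fit downstream pass rate against the logarithm of distillation data quantity and inspect the residuals for convexity. The data points, variable names, and fitting choice below are illustrative placeholders, not numbers or methods from the paper.

```python
# Illustrative sketch (hypothetical numbers, not results from the paper):
# test whether downstream accuracy grows faster than log-linearly with
# distillation data quantity.
import numpy as np

# Hypothetical (data quantity, pass rate) pairs showing a "valley":
# an initial dip at small scales, then a steep rise.
data_sizes = np.array([1e3, 3e3, 1e4, 3e4, 1e5, 3e5])
pass_rates = np.array([0.12, 0.09, 0.08, 0.15, 0.28, 0.45])

# Fit pass_rate ~ a * log(n) + b on the post-valley points only.
log_n = np.log(data_sizes[2:])
a, b = np.polyfit(log_n, pass_rates[2:], deg=1)
predicted = a * log_n + b

# A convex residual pattern (endpoints above the fitted line, middle below)
# suggests growth sharper than log-linear in this regime.
residuals = pass_rates[2:] - predicted
print("log-linear fit slope:", a)
print("residuals:", residuals)
```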