

Specialization after Generalization: Towards Understanding Test-Time Training in Foundation Models

September 29, 2025
作者: Jonas Hübotter, Patrik Wolf, Alexander Shevchenko, Dennis Jüni, Andreas Krause, Gil Kur
cs.AI

Abstract

Recent empirical studies have explored the idea of continuing to train a model at test-time for a given task, known as test-time training (TTT), and have found it to yield significant performance improvements. However, there is limited understanding of why and when TTT is effective. Earlier explanations mostly focused on the observation that TTT may help when applied to out-of-distribution adaptation or used with privileged data. However, the growing scale of foundation models with most test data being in-distribution questions these explanations. We instead posit that foundation models remain globally underparameterized, with TTT providing a mechanism for specialization after generalization, focusing capacity on concepts relevant to the test task. Specifically, under the linear representation hypothesis, we propose a model in which TTT achieves a substantially smaller in-distribution test error than global training. We empirically validate our model's key assumptions by training a sparse autoencoder on ImageNet, showing that semantically related data points are explained by only a few shared concepts. Finally, we perform scaling studies across image and language tasks that confirm the practical implications of our model, identifying the regimes where specialization is most effective.
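The core idea, continuing to train on task-relevant data at test time so an underparameterized model specializes after generalizing, can be illustrated with a minimal sketch. This is not the paper's setup: the toy data, scalar linear model, and hyperparameters below are illustrative assumptions chosen only to show how a few test-time gradient steps reduce error on the test task.

```python
# Minimal sketch of test-time training (TTT), assuming a toy setup:
# a single scalar weight is globally underparameterized for two tasks,
# and TTT specializes it to the test task with a few gradient steps.

def mse(w, data):
    """Mean squared error of the linear model y = w * x on (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def fit(w, data, lr=0.01, steps=200):
    """Plain gradient descent on the MSE objective, starting from w."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# "Global" training data mixes two tasks (true slopes 1.0 and 3.0);
# one scalar weight cannot fit both, so global training averages them.
task_a = [(x, 1.0 * x) for x in [1.0, 2.0, 3.0]]
task_b = [(x, 3.0 * x) for x in [1.0, 2.0, 3.0]]
w_global = fit(0.0, task_a + task_b)  # converges near w = 2.0

# TTT: continue training from the global solution on a few examples
# drawn from the test task (task B) only.
w_ttt = fit(w_global, task_b[:2])  # converges near w = 3.0

# Specialization yields a much smaller in-distribution test error.
test_set = [(x, 3.0 * x) for x in [1.5, 2.5]]
print(mse(w_global, test_set) > mse(w_ttt, test_set))  # → True
```

In this caricature, the global model's capacity is split across concepts it never needs for the test task; TTT reallocates that capacity, mirroring the paper's "specialization after generalization" mechanism.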
PDF · October 1, 2025