TTCS: Test-Time Curriculum Synthesis for Self-Evolving
January 30, 2026
Authors: Chengyi Yang, Zhishang Xiang, Yunbo Tang, Zongpei Teng, Chengsong Huang, Fei Long, Yuhan Liu, Jinsong Su
cs.AI
Abstract
Test-Time Training offers a promising way to improve the reasoning ability of large language models (LLMs) by adapting the model using only the test questions. However, existing methods struggle with difficult reasoning problems for two reasons: raw test questions are often too difficult to yield high-quality pseudo-labels, and the limited size of test sets makes continuous online updates prone to instability. To address these limitations, we propose TTCS, a co-evolving test-time training framework. Specifically, TTCS initializes two policies from the same pretrained model: a question synthesizer and a reasoning solver. These policies evolve through iterative optimization: the synthesizer generates progressively challenging question variants conditioned on the test questions, creating a structured curriculum tailored to the solver's current capability, while the solver updates itself using self-consistency rewards computed from multiple sampled responses on both the original test questions and the synthetic ones. Crucially, the solver's feedback guides the synthesizer to generate questions aligned with the model's current capability, and the generated question variants in turn stabilize the solver's test-time training. Experiments show that TTCS consistently strengthens reasoning ability on challenging mathematical benchmarks and transfers to general-domain tasks across different LLM backbones, highlighting a scalable path towards dynamically constructing test-time curricula for self-evolving LLMs. Our code and implementation details are available at https://github.com/XMUDeepLIT/TTCS.
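The abstract describes the solver updating itself with self-consistency rewards computed over multiple sampled responses per question. A minimal sketch of that reward rule, assuming the common formulation where the majority-vote answer serves as the pseudo-label (the function name and input/output shapes here are illustrative, not taken from the TTCS codebase):

```python
from collections import Counter

def self_consistency_rewards(answers):
    """Given the final answers extracted from k responses sampled for
    one question, take the majority answer as a pseudo-label and give
    each response reward 1.0 if it agrees with the majority, else 0.0.
    """
    pseudo_label, _ = Counter(answers).most_common(1)[0]
    return [1.0 if a == pseudo_label else 0.0 for a in answers]

# Example: 3 of 4 sampled responses agree on "42".
rewards = self_consistency_rewards(["42", "42", "17", "42"])
print(rewards)  # [1.0, 1.0, 0.0, 1.0]
```

This label-free reward is what makes test-time training possible on unlabeled test questions, and it also motivates the curriculum: if the raw question is too hard, sampled answers rarely agree and the pseudo-label is noisy, which is why the synthesizer's easier-to-harder variants help stabilize the updates.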