Online Experiential Learning for Language Models

March 17, 2026
Authors: Tianzhu Ye, Li Dong, Qingxiu Dong, Xun Wu, Shaohan Huang, Furu Wei
cs.AI

Abstract

The prevailing paradigm for improving large language models relies on offline training with human annotations or simulated environments, leaving the rich experience accumulated during real-world deployment entirely unexploited. We propose Online Experiential Learning (OEL), a framework that enables language models to continuously improve from their own deployment experience. OEL operates in two stages: first, transferable experiential knowledge is extracted and accumulated from interaction trajectories collected on the user side; second, this knowledge is consolidated into model parameters via on-policy context distillation, requiring no access to the user-side environment. The two stages are iterated to form an online learning loop, where the improved model collects higher-quality trajectories that yield richer experiential knowledge for subsequent rounds. We evaluate OEL on text-based game environments across multiple model scales and both thinking and non-thinking variants. OEL achieves consistent improvements over successive iterations, enhancing both task accuracy and token efficiency while preserving out-of-distribution performance. Our analysis further shows that extracted experiential knowledge is significantly more effective than raw trajectories, and that on-policy consistency between the knowledge source and the policy model is critical for effective learning.
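To make the two-stage loop concrete, below is a minimal sketch of how OEL's iteration could be structured, assuming stub implementations throughout. The function names and placeholder logic (collect_trajectories, extract_knowledge, context_distill) are illustrative scaffolding inferred from the abstract, not the authors' actual implementation or API.

```python
# Minimal sketch of the OEL loop described in the abstract.
# All helpers are hypothetical stand-ins; a real system would deploy
# an LM, log user interactions, and run a training pipeline.

def collect_trajectories(model: str) -> list[str]:
    """Stage 1a (user side): deploy the model, log interaction traces."""
    return [f"trajectory from {model}"]  # placeholder for real rollouts

def extract_knowledge(trajectories: list[str]) -> list[str]:
    """Stage 1b: distill transferable lessons from raw trajectories."""
    return [f"lesson({t})" for t in trajectories]

def context_distill(model: str, knowledge: list[str]) -> str:
    """Stage 2 (no environment access): consolidate knowledge into the
    parameters via on-policy context distillation -- the model with the
    accumulated knowledge in context teaches the same model without it."""
    return f"{model}+distilled[{len(knowledge)} lessons]"

def oel_loop(model: str, num_rounds: int = 3) -> str:
    """Iterate the two stages to form the online learning loop."""
    knowledge_base: list[str] = []  # accumulated experiential knowledge
    for _ in range(num_rounds):
        trajectories = collect_trajectories(model)          # user side
        knowledge_base.extend(extract_knowledge(trajectories))
        model = context_distill(model, knowledge_base)      # server side
        # The improved model collects higher-quality trajectories,
        # yielding richer knowledge for the next round.
    return model

if __name__ == "__main__":
    print(oel_loop("base-model"))
```

Note the separation the abstract emphasizes: only Stage 1 touches the user-side environment, while Stage 2 consolidates the extracted knowledge into parameters without environment access, which is what lets the loop run over real deployment experience.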