

Online Experiential Learning for Language Models

March 17, 2026
作者: Tianzhu Ye, Li Dong, Qingxiu Dong, Xun Wu, Shaohan Huang, Furu Wei
cs.AI

Abstract

The prevailing paradigm for improving large language models relies on offline training with human annotations or simulated environments, leaving the rich experience accumulated during real-world deployment entirely unexploited. We propose Online Experiential Learning (OEL), a framework that enables language models to continuously improve from their own deployment experience. OEL operates in two stages: first, transferable experiential knowledge is extracted and accumulated from interaction trajectories collected on the user side; second, this knowledge is consolidated into model parameters via on-policy context distillation, requiring no access to the user-side environment. The two stages are iterated to form an online learning loop, where the improved model collects higher-quality trajectories that yield richer experiential knowledge for subsequent rounds. We evaluate OEL on text-based game environments across multiple model scales and both thinking and non-thinking variants. OEL achieves consistent improvements over successive iterations, enhancing both task accuracy and token efficiency while preserving out-of-distribution performance. Our analysis further shows that extracted experiential knowledge is significantly more effective than raw trajectories, and that on-policy consistency between the knowledge source and the policy model is critical for effective learning.
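The two-stage loop described above can be sketched as a minimal toy program. All function names, the `skill` scalar standing in for model parameters, and the update rule are illustrative assumptions, not the paper's implementation; the point is only the control flow: collect trajectories, extract knowledge, distill into parameters, repeat.

```python
# Toy sketch of the OEL loop from the abstract (all names hypothetical).

def collect_trajectories(model, n=4):
    # User side: the deployed model interacts with tasks; here a
    # trajectory is faked as a (task, success) pair.
    return [(f"task-{i}", model["skill"] > i) for i in range(n)]

def extract_knowledge(trajectories):
    # Stage 1: compress raw trajectories into transferable experiential
    # knowledge (here: the number of solved tasks stands in for lessons).
    return sum(1 for _, success in trajectories if success)

def distill_into_params(model, knowledge):
    # Stage 2: consolidate knowledge into parameters, standing in for
    # on-policy context distillation (no user-side access needed here).
    return {"skill": model["skill"] + 0.5 * knowledge}

def oel(model, rounds=3):
    # Iterating the two stages closes the loop: an improved model
    # collects better trajectories, yielding richer knowledge.
    for _ in range(rounds):
        trajectories = collect_trajectories(model)
        model = distill_into_params(model, extract_knowledge(trajectories))
    return model

final_model = oel({"skill": 1.0})
print(final_model["skill"])  # skill grows monotonically over rounds
```

In this toy version the compounding effect the abstract claims is visible directly: each round's knowledge count grows because the previous round raised `skill`, which is the loop structure the paper studies at scale.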
March 19, 2026