
Odysseus: Scaling VLMs to 100+ Turn Decision-Making in Games via Reinforcement Learning

May 1, 2026
Authors: Chengshuai Shi, Wenzhe Li, Xinran Liang, Yizhou Lu, Wenjia Yang, Ruirong Feng, Seth Karten, Ziran Yang, Zihan Ding, Gabriel Sarch, Danqi Chen, Karthik Narasimhan, Chi Jin
cs.AI

Abstract

Given the rapidly growing capabilities of vision-language models (VLMs), extending them to interactive decision-making tasks such as video games has emerged as a promising frontier. However, existing approaches either rely on large-scale supervised fine-tuning (SFT) on human trajectories or apply reinforcement learning (RL) only in relatively short-horizon settings (typically around 20-30 turns). In this work, we study RL-based training of VLMs for long-horizon decision-making in Super Mario Land, a visually grounded environment requiring 100+ turns of interaction with coordinated perception, reasoning, and action. We begin with a systematic investigation of key algorithmic components and propose an adapted variant of PPO with a lightweight turn-level critic, which substantially improves training stability and sample efficiency over critic-free methods such as GRPO and Reinforce++. We further show that pretrained VLMs provide strong action priors, significantly improving sample efficiency during RL training and reducing the need for manual design choices such as action engineering, compared to classical deep RL trained from scratch. Building on these insights, we introduce Odysseus, an open training framework for VLM agents that achieves substantial gains across multiple levels of the game, reaching at least 3 times the average game progress of frontier models. Moreover, the trained models exhibit consistent improvements under both in-game and cross-game generalization settings, while maintaining general-domain capabilities. Overall, our results identify the key ingredients for making RL stable and effective in long-horizon, multi-modal settings, and provide practical guidance for developing VLMs as embodied agents.
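
The abstract's central algorithmic point is replacing critic-free advantage estimation (GRPO, Reinforce++) with a PPO variant whose lightweight critic scores every turn, so credit can be assigned within a 100+ turn episode rather than once per trajectory. As a rough illustration of what a turn-level critic enables (a minimal sketch, not the paper's implementation; the function name, shapes, and hyperparameters are assumptions), the Python snippet below computes per-turn PPO advantages with generalized advantage estimation (GAE), treating each agent turn as one timestep:

    import torch

    def turn_level_gae(rewards, values, gamma=0.99, lam=0.95):
        # rewards: per-turn rewards, shape (T,)
        # values: critic estimates per turn plus one bootstrap value, shape (T + 1,)
        T = rewards.shape[0]
        advantages = torch.zeros(T)
        gae = 0.0
        for t in reversed(range(T)):
            # One-step TD error at turn t.
            delta = rewards[t] + gamma * values[t + 1] - values[t]
            # Discounted, lambda-weighted sum of future TD errors.
            gae = delta + gamma * lam * gae
            advantages[t] = gae
        # Regression targets for updating the critic.
        returns = advantages + values[:T]
        return advantages, returns

    # Hypothetical 3-turn episode: reward arrives only on the final turn.
    rewards = torch.tensor([0.0, 0.0, 1.0])
    values = torch.tensor([0.1, 0.2, 0.5, 0.0])  # last entry bootstraps the terminal state
    advantages, returns = turn_level_gae(rewards, values)

With per-turn advantages, early turns that merely set up a later reward still receive graded credit, which is one plausible reason a turn-level critic improves stability and sample efficiency over methods that broadcast a single episode-level advantage across all turns.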