Vision-Language Models are Zero-Shot Reward Models for Reinforcement Learning

October 19, 2023
Authors: Juan Rocamonde, Victoriano Montesinos, Elvis Nava, Ethan Perez, David Lindner
cs.AI

Abstract

Reinforcement learning (RL) requires either manually specifying a reward function, which is often infeasible, or learning a reward model from a large amount of human feedback, which is often very expensive. We study a more sample-efficient alternative: using pretrained vision-language models (VLMs) as zero-shot reward models (RMs) to specify tasks via natural language. We propose a natural and general approach to using VLMs as reward models, which we call VLM-RMs. We use VLM-RMs based on CLIP to train a MuJoCo humanoid to learn complex tasks without a manually specified reward function, such as kneeling, doing the splits, and sitting in a lotus position. For each of these tasks, we only provide a single sentence text prompt describing the desired task with minimal prompt engineering. We provide videos of the trained agents at: https://sites.google.com/view/vlm-rm. We can improve performance by providing a second "baseline" prompt and projecting out parts of the CLIP embedding space irrelevant to distinguish between goal and baseline. Further, we find a strong scaling effect for VLM-RMs: larger VLMs trained with more compute and data are better reward models. The failure modes of VLM-RMs we encountered are all related to known capability limitations of current VLMs, such as limited spatial reasoning ability or visually unrealistic environments that are far off-distribution for the VLM. We find that VLM-RMs are remarkably robust as long as the VLM is large enough. This suggests that future VLMs will become more and more useful reward models for a wide range of RL applications.
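
As a rough illustration of the approach described in the abstract, the sketch below computes a CLIP-based reward as the cosine similarity between a rendered environment frame and a goal prompt, with an optional "baseline" prompt used to project the state embedding onto the goal-minus-baseline direction. It uses OpenAI's open-source `clip` package; the function names, the mixing weight `alpha`, and the exact projection formula are illustrative assumptions based on the abstract, not the authors' released implementation.

```python
# Minimal sketch of a CLIP-based zero-shot reward model (VLM-RM).
# Assumptions: OpenAI's `clip` package; the alpha-weighted projection is a
# simplified stand-in for the paper's goal-baseline regularization.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def embed_text(prompt: str) -> torch.Tensor:
    tokens = clip.tokenize([prompt]).to(device)
    with torch.no_grad():
        emb = model.encode_text(tokens)
    return emb / emb.norm(dim=-1, keepdim=True)

def embed_image(frame: Image.Image) -> torch.Tensor:
    image = preprocess(frame).unsqueeze(0).to(device)
    with torch.no_grad():
        emb = model.encode_image(image)
    return emb / emb.norm(dim=-1, keepdim=True)

def vlm_reward(frame, goal_prompt, baseline_prompt=None, alpha=0.5):
    """Reward = cosine similarity between the rendered frame and the goal prompt.

    If a baseline prompt is given, the state embedding is partially projected
    onto the goal-minus-baseline direction before computing the similarity
    (a simplified form of goal-baseline regularization, controlled by alpha).
    """
    g = embed_text(goal_prompt)    # goal embedding
    s = embed_image(frame)         # state (rendered frame) embedding
    if baseline_prompt is not None:
        b = embed_text(baseline_prompt)
        d = g - b
        d = d / d.norm(dim=-1, keepdim=True)
        s_proj = (s @ d.T) * d     # component of the state along goal - baseline
        s = alpha * s_proj + (1 - alpha) * s
        s = s / s.norm(dim=-1, keepdim=True)
    return (s * g).sum().item()    # cosine similarity in [-1, 1]

# Example (hypothetical prompts for a rendered MuJoCo humanoid frame):
# reward = vlm_reward(frame, "a humanoid robot kneeling",
#                     baseline_prompt="a humanoid robot")
```

In an RL loop, this reward would be queried on rendered frames of the environment in place of a hand-written reward function; no fine-tuning of the VLM is involved, which is what makes the reward model zero-shot.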