
Genie: Generative Interactive Environments

February 23, 2024
Authors: Jake Bruce, Michael Dennis, Ashley Edwards, Jack Parker-Holder, Yuge Shi, Edward Hughes, Matthew Lai, Aditi Mavalankar, Richie Steigerwald, Chris Apps, Yusuf Aytar, Sarah Bechtle, Feryal Behbahani, Stephanie Chan, Nicolas Heess, Lucy Gonzalez, Simon Osindero, Sherjil Ozair, Scott Reed, Jingwei Zhang, Konrad Zolna, Jeff Clune, Nando de Freitas, Satinder Singh, Tim Rocktäschel
cs.AI

Abstract

We introduce Genie, the first generative interactive environment trained in an unsupervised manner from unlabelled Internet videos. The model can be prompted to generate an endless variety of action-controllable virtual worlds described through text, synthetic images, photographs, and even sketches. At 11B parameters, Genie can be considered a foundation world model. It comprises a spatiotemporal video tokenizer, an autoregressive dynamics model, and a simple and scalable latent action model. Genie enables users to act in the generated environments on a frame-by-frame basis despite training without any ground-truth action labels or other domain-specific requirements typically found in the world model literature. Further, the resulting learned latent action space facilitates training agents to imitate behaviors from unseen videos, opening the path for training generalist agents of the future.
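The abstract describes a three-component architecture (spatiotemporal video tokenizer, latent action model, autoregressive dynamics model) and frame-by-frame user control. Below is a minimal, hypothetical Python sketch of the interaction loop implied by that description. All class names, token and codebook sizes, and the size of the discrete action vocabulary are illustrative assumptions, not Genie's actual code or API; the latent action model is used during training to infer actions from unlabelled video, so it does not appear in this playback loop.

```python
# Illustrative sketch only: names, shapes, and sizes below are assumptions
# for exposition, not Genie's actual implementation.
import numpy as np

NUM_LATENT_ACTIONS = 8      # assumed size of the discrete latent action vocabulary
TOKENS_PER_FRAME = 16 * 16  # assumed spatial token grid per frame
CODEBOOK_SIZE = 1024        # assumed video-tokenizer vocabulary size


class VideoTokenizer:
    """Stand-in for the spatiotemporal video tokenizer (frames <-> discrete tokens)."""

    def encode(self, frame: np.ndarray) -> np.ndarray:
        # Placeholder: map a frame to a grid of discrete token ids.
        return np.random.randint(0, CODEBOOK_SIZE, size=TOKENS_PER_FRAME)

    def decode(self, tokens: np.ndarray) -> np.ndarray:
        # Placeholder: reconstruct an image from token ids.
        return np.zeros((64, 64, 3), dtype=np.uint8)


class DynamicsModel:
    """Stand-in for the autoregressive dynamics model: past tokens + latent action -> next-frame tokens."""

    def predict_next(self, token_history: list[np.ndarray], latent_action: int) -> np.ndarray:
        # Placeholder: predict the next frame's tokens conditioned on history and action.
        return np.random.randint(0, CODEBOOK_SIZE, size=TOKENS_PER_FRAME)


def play(prompt_frame: np.ndarray, user_actions: list[int]) -> list[np.ndarray]:
    """Frame-by-frame interaction: each user-chosen latent action advances the generated world."""
    tokenizer, dynamics = VideoTokenizer(), DynamicsModel()
    history = [tokenizer.encode(prompt_frame)]  # the prompt (text, image, photo, sketch) seeds the world
    frames = []
    for action in user_actions:
        assert 0 <= action < NUM_LATENT_ACTIONS
        history.append(dynamics.predict_next(history, action))
        frames.append(tokenizer.decode(history[-1]))
    return frames


if __name__ == "__main__":
    prompt = np.zeros((64, 64, 3), dtype=np.uint8)  # e.g. an encoded sketch or photograph
    rollout = play(prompt, user_actions=[0, 3, 3, 1])
    print(f"Generated {len(rollout)} interactive frames")
```

The sketch is only meant to show how the pieces fit together: the tokenizer turns the prompt into discrete tokens, and the dynamics model rolls the world forward one frame per chosen latent action, which is what makes the generated environment controllable despite training on action-free video.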