
VideoWorld: Exploring Knowledge Learning from Unlabeled Videos

January 16, 2025
Authors: Zhongwei Ren, Yunchao Wei, Xun Guo, Yao Zhao, Bingyi Kang, Jiashi Feng, Xiaojie Jin
cs.AI

Abstract

This work explores whether a deep generative model can learn complex knowledge solely from visual input, in contrast to the prevalent focus on text-based models like large language models (LLMs). We develop VideoWorld, an auto-regressive video generation model trained on unlabeled video data, and test its knowledge acquisition abilities in video-based Go and robotic control tasks. Our experiments reveal two key findings: (1) video-only training provides sufficient information for learning knowledge, including rules, reasoning and planning capabilities, and (2) the representation of visual change is crucial for knowledge acquisition. To improve both the efficiency and efficacy of this process, we introduce the Latent Dynamics Model (LDM) as a key component of VideoWorld. Remarkably, VideoWorld reaches a 5-dan professional level in the Video-GoBench with just a 300-million-parameter model, without relying on search algorithms or reward mechanisms typical in reinforcement learning. In robotic tasks, VideoWorld effectively learns diverse control operations and generalizes across environments, approaching the performance of oracle models in CALVIN and RLBench. This study opens new avenues for knowledge acquisition from visual data, with all code, data, and models open-sourced for further research.
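To make the architecture described above concrete, below is a minimal, hypothetical sketch of the general idea: an auto-regressive transformer over discrete video tokens, augmented with a latent dynamics head that predicts compact codes summarizing upcoming visual change. The class names, vocabulary sizes, and layer dimensions are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch: next-token prediction over VQ-encoded video frames,
# plus a latent-dynamics head for compact codes of future visual change.
# All names and hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn


class LatentDynamicsHead(nn.Module):
    """Predicts a discrete latent code summarizing the next step's visual change."""

    def __init__(self, d_model: int, latent_vocab: int):
        super().__init__()
        self.proj = nn.Linear(d_model, latent_vocab)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, d_model) -> logits over latent change codes
        return self.proj(hidden)


class AutoRegressiveVideoModel(nn.Module):
    """Causal transformer over discrete video tokens from a frozen VQ tokenizer."""

    def __init__(self, vocab_size=8192, latent_vocab=512, d_model=512,
                 n_layers=8, n_heads=8, max_len=2048):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.frame_head = nn.Linear(d_model, vocab_size)          # next visual token
        self.dynamics_head = LatentDynamicsHead(d_model, latent_vocab)

    def forward(self, tokens: torch.Tensor):
        # tokens: (batch, seq) of discrete codes for the observed frames
        b, t = tokens.shape
        pos = torch.arange(t, device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        mask = nn.Transformer.generate_square_subsequent_mask(t).to(tokens.device)
        h = self.blocks(x, mask=mask)                             # causal self-attention
        return self.frame_head(h), self.dynamics_head(h)


if __name__ == "__main__":
    model = AutoRegressiveVideoModel()
    fake_tokens = torch.randint(0, 8192, (2, 64))                 # 2 clips, 64 tokens each
    frame_logits, latent_logits = model(fake_tokens)
    print(frame_logits.shape, latent_logits.shape)                # (2, 64, 8192), (2, 64, 512)
```

In this reading of the abstract, the latent dynamics head is what makes "the representation of visual change" an explicit training target rather than something implicit in raw pixel prediction; the exact codebook design and training losses would follow the open-sourced code rather than this sketch.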
