
Rethinking Visual Intelligence: Insights from Video Pretraining

October 28, 2025
Authors: Pablo Acuaviva, Aram Davtyan, Mariam Hassan, Sebastian Stapf, Ahmad Rahimi, Alexandre Alahi, Paolo Favaro
cs.AI

Abstract

Large language models (LLMs) have demonstrated that large-scale pretraining enables systems to adapt rapidly to new problems with little supervision in the language domain. This success, however, has not translated as effectively to the visual domain, where models, including LLMs, continue to struggle with compositional understanding, sample efficiency, and general-purpose problem-solving. We investigate Video Diffusion Models (VDMs) as a promising direction for bridging this gap. Pretraining on spatiotemporal data endows these models with strong inductive biases for structure and dynamics, which we hypothesize can support broad task adaptability. To test this, we design a controlled evaluation in which both a pretrained LLM and a pretrained VDM are equipped with lightweight adapters and presented with tasks in their natural modalities. Across benchmarks including ARC-AGI, ConceptARC, visual games, route planning, and cellular automata, VDMs demonstrate higher data efficiency than their language counterparts. Taken together, our results indicate that video pretraining offers inductive biases that support progress toward visual foundation models.
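
The abstract describes equipping both a pretrained LLM and a pretrained VDM with "lightweight adapters" before fine-tuning on each task. The paper does not specify the adapter design here; as an illustration only, the sketch below shows one common lightweight-adaptation pattern, a LoRA-style low-rank update on a frozen linear layer. All class, parameter, and variable names are assumptions, not the authors' implementation.

```python
# Minimal sketch of a LoRA-style lightweight adapter on a frozen backbone.
# Illustrative only; not the adapter used in the paper.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a small trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pretrained path plus a small trainable correction.
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


# Usage: swap selected linear layers of the pretrained model for LoRALinear,
# then fine-tune only the adapter parameters on the target task.
layer = nn.Linear(512, 512)
adapted = LoRALinear(layer, rank=8)
trainable = [p for p in adapted.parameters() if p.requires_grad]
```

Because only the low-rank matrices are trainable, adaptation of this kind adds a small fraction of new parameters, which is what makes per-task comparisons of data efficiency between the two pretrained backbones feasible.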