

V-JEPA 2.1: Unlocking Dense Features in Video Self-Supervised Learning

March 15, 2026
Authors: Lorenzo Mur-Labadia, Matthew Muckley, Amir Bar, Mido Assran, Koustuv Sinha, Mike Rabbat, Yann LeCun, Nicolas Ballas, Adrien Bardes
cs.AI

Abstract

We present V-JEPA 2.1, a family of self-supervised models that learn dense, high-quality visual representations for both images and videos while retaining strong global scene understanding. The approach combines four key components. First, a dense predictive loss uses a masking-based objective in which both visible and masked tokens contribute to the training signal, encouraging explicit spatial and temporal grounding. Second, deep self-supervision applies the self-supervised objective hierarchically across multiple intermediate encoder layers to improve representation quality. Third, multi-modal tokenizers enable unified training across images and videos. Finally, the model benefits from effective scaling in both model capacity and training data. Together, these design choices produce representations that are spatially structured, semantically coherent, and temporally consistent. Empirically, V-JEPA 2.1 achieves state-of-the-art performance on several challenging benchmarks, including 7.71 mAP on Ego4D for short-term object-interaction anticipation and 40.8 Recall@5 on EPIC-KITCHENS for high-level action anticipation, as well as a 20-point improvement in real-robot grasping success rate over V-JEPA-2 AC. The model also demonstrates strong performance in robotic navigation (5.687 ATE on TartanDrive), depth estimation (0.307 RMSE on NYUv2 with a linear probe), and global recognition (77.7 on Something-Something-V2). These results show that V-JEPA 2.1 significantly advances the state of the art in dense visual understanding and world modeling.
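To make the first component concrete, here is a minimal, hedged sketch (not the authors' code) of a masking-based dense predictive loss in which both visible and masked token positions contribute to the training signal. The function name and arguments (`predicted`, `target`, `mask`, the per-position weights) are illustrative assumptions; the paper's actual objective, masking strategy, and weighting may differ.

```python
# Sketch only: a dense predictive loss where BOTH visible and masked
# tokens carry a loss term, rather than supervising masked positions alone.
# Embeddings are plain Python lists of floats; all names are illustrative.

def dense_predictive_loss(predicted, target, mask,
                          masked_weight=1.0, visible_weight=1.0):
    """Weighted mean squared error over all token positions.

    predicted, target: lists of equal-length embedding vectors (one per token).
    mask: list of booleans, True where the token was masked in the input.
    Visible positions receive a (possibly down-weighted) loss term too,
    which is what distinguishes this from a purely masked objective.
    """
    total, count = 0.0, 0
    for pred_vec, tgt_vec, is_masked in zip(predicted, target, mask):
        weight = masked_weight if is_masked else visible_weight
        for p, t in zip(pred_vec, tgt_vec):
            total += weight * (p - t) ** 2  # per-dimension squared error
            count += 1
    return total / count
```

Under this reading, setting `visible_weight=0.0` would recover a conventional masked-only objective, so the visible-token term is the part that encourages dense, spatially grounded features everywhere in the frame.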