Action100M: A Large-scale Video Action Dataset
January 15, 2026
Authors: Delong Chen, Tejaswi Kasarla, Yejin Bang, Mustafa Shukor, Willy Chung, Jade Yu, Allen Bolourchi, Theo Moutakanni, Pascale Fung
cs.AI
Abstract
Inferring physical actions from visual observations is a fundamental capability for advancing machine intelligence in the physical world. Achieving this requires large-scale, open-vocabulary video action datasets that span broad domains. We introduce Action100M, a large-scale dataset constructed from 1.2M Internet instructional videos (14.6 years of total duration), yielding O(100 million) temporally localized segments with open-vocabulary action supervision and rich captions. Action100M is generated by a fully automated pipeline that (i) performs hierarchical temporal segmentation using V-JEPA 2 embeddings, (ii) produces multi-level frame and segment captions organized as a Tree-of-Captions, and (iii) aggregates evidence with a reasoning model (GPT-OSS-120B) under a multi-round Self-Refine procedure to output structured annotations (brief/detailed action, actor, brief/detailed caption). Training VL-JEPA on Action100M demonstrates consistent data-scaling improvements and strong zero-shot performance across diverse action recognition benchmarks, establishing Action100M as a new foundation for scalable research in video understanding and world modeling.
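To make the pipeline's output schema concrete, the following minimal Python sketch shows one way the Tree-of-Captions and the structured per-segment annotation described in the abstract could be represented and aggregated. All names (`Segment`, `ActionAnnotation`, `collect_evidence`, `annotate_segment`) and the placeholder refinement loop are illustrative assumptions, not the authors' implementation; the actual pipeline segments videos with V-JEPA 2 embeddings and aggregates evidence with GPT-OSS-120B under multi-round Self-Refine.

```python
# Hypothetical sketch of the Action100M annotation schema; names are assumptions,
# not the authors' code.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ActionAnnotation:
    """Structured labels for one segment: brief/detailed action, actor, brief/detailed caption."""
    brief_action: str
    detailed_action: str
    actor: str
    brief_caption: str
    detailed_caption: str


@dataclass
class Segment:
    """A temporally localized span produced by hierarchical segmentation."""
    start_s: float
    end_s: float
    captions: List[str] = field(default_factory=list)        # captions describing this span
    children: List["Segment"] = field(default_factory=list)  # finer-grained sub-segments


def collect_evidence(segment: Segment) -> List[str]:
    """Flatten the Tree-of-Captions rooted at `segment` into a list of textual evidence."""
    evidence = list(segment.captions)
    for child in segment.children:
        evidence.extend(collect_evidence(child))
    return evidence


def annotate_segment(segment: Segment, rounds: int = 3) -> ActionAnnotation:
    """Aggregate caption evidence into a structured annotation.

    In the real pipeline this step queries a reasoning model and iteratively
    critiques and rewrites its own draft (multi-round Self-Refine); here the
    model call is replaced by a trivial placeholder to keep the sketch runnable.
    """
    evidence = collect_evidence(segment)
    draft = ActionAnnotation("", "", "", "", "")
    for _ in range(rounds):
        # placeholder for: draft = reasoning_model.refine(draft, evidence)
        draft.brief_caption = "; ".join(evidence)[:80]
    return draft


if __name__ == "__main__":
    clip = Segment(0.0, 12.5, captions=["a person kneads dough on a counter"],
                   children=[Segment(0.0, 6.0, captions=["hands press and fold dough"])])
    print(annotate_segment(clip))
```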