EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone
July 11, 2023
Authors: Shraman Pramanick, Yale Song, Sayan Nag, Kevin Qinghong Lin, Hardik Shah, Mike Zheng Shou, Rama Chellappa, Pengchuan Zhang
cs.AI
Abstract
Video-language pre-training (VLP) has become increasingly important due to
its ability to generalize to various vision and language tasks. However,
existing egocentric VLP frameworks utilize separate video and language encoders
and learn task-specific cross-modal information only during fine-tuning,
limiting the development of a unified system. In this work, we introduce the
second generation of egocentric video-language pre-training (EgoVLPv2), a
significant improvement from the previous generation, by incorporating
cross-modal fusion directly into the video and language backbones. EgoVLPv2
learns strong video-text representation during pre-training and reuses the
cross-modal attention modules to support different downstream tasks in a
flexible and efficient manner, reducing fine-tuning costs. Moreover, our
proposed fusion-in-the-backbone strategy is more lightweight and
compute-efficient than stacking additional fusion-specific layers. Extensive
experiments on a wide range of VL tasks demonstrate the effectiveness of
EgoVLPv2, which achieves consistent state-of-the-art performance over strong
baselines across all downstream tasks. Our project page can be found at
https://shramanpramanick.github.io/EgoVLPv2/.
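To make the fusion-in-the-backbone idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: the module names, the zero-initialized gate, and the `fuse` flag are illustrative assumptions. It shows a transformer block whose cross-modal attention sub-layer can be switched on (fusion-encoder mode, e.g. for grounding or QA) or bypassed (dual-encoder mode, e.g. for fast retrieval), so a single pre-trained backbone can be reused across downstream tasks without stacking extra fusion layers.

```python
# Minimal sketch (assumed, not the paper's code) of cross-modal fusion inside a
# transformer backbone: the cross-attention path is optional and gated.
import torch
import torch.nn as nn


class FusionBlock(nn.Module):
    def __init__(self, dim: int = 768, num_heads: int = 12):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        # Zero-initialized gate (an assumption here) so enabling fusion does not
        # disturb the uni-modal behavior at the start of training.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x, context=None, fuse: bool = False):
        # Self-attention over this modality's own tokens.
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        # Optional cross-attention into the other modality's tokens.
        if fuse and context is not None:
            h = self.norm2(x)
            x = x + self.gate * self.cross_attn(
                h, context, context, need_weights=False
            )[0]
        return x + self.mlp(self.norm3(x))


if __name__ == "__main__":
    video_tokens = torch.randn(2, 196, 768)  # e.g. patch tokens from a video encoder
    text_tokens = torch.randn(2, 32, 768)    # e.g. word tokens from a text encoder
    block = FusionBlock()
    v_dual = block(video_tokens, fuse=False)                          # dual-encoder mode
    v_fused = block(video_tokens, context=text_tokens, fuse=True)     # fusion mode
    print(v_dual.shape, v_fused.shape)
```

With `fuse=False` the block reduces to a plain uni-modal transformer layer, which is the sense in which the same backbone and its cross-modal attention modules can be reused flexibly across retrieval-style and fusion-style downstream tasks.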