Large-scale Pre-training for Grounded Video Caption Generation
March 13, 2025
Authors: Evangelos Kazakos, Cordelia Schmid, Josef Sivic
cs.AI
Abstract
We propose a novel approach for captioning and object grounding in video,
where the objects in the caption are grounded in the video via temporally dense
bounding boxes. We introduce the following contributions. First, we present a
large-scale automatic annotation method that aggregates captions grounded with
bounding boxes across individual frames into temporally dense and consistent
bounding box annotations. We apply this approach to the HowTo100M dataset to
construct a large-scale pre-training dataset, named HowToGround1M. We also
introduce a Grounded Video Caption Generation model, dubbed GROVE, and
pre-train the model on HowToGround1M. Second, we introduce a new dataset,
called iGround, of 3500 videos with manually annotated captions and dense
spatio-temporally grounded bounding boxes. This allows us to measure progress
on this challenging problem, as well as to fine-tune our model on this
small-scale but high-quality data. Third, we demonstrate that our approach
achieves state-of-the-art results on the proposed iGround dataset compared to a
number of baselines, as well as on the VidSTG and ActivityNet-Entities
datasets. We perform extensive ablations that demonstrate the importance of
pre-training using our automatically annotated HowToGround1M dataset followed
by fine-tuning on the manually annotated iGround dataset and validate the key
technical contributions of our model.
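The automatic annotation method aggregates per-frame grounded bounding boxes into temporally dense, consistent annotations. The abstract does not give implementation details, so the following is only a minimal, hypothetical sketch of that kind of aggregation, using greedy IoU-based linking of same-phrase boxes across frames; the function names, data structures, and thresholds (`Box`, `link_frames`, `iou_thr=0.3`) are illustrative assumptions, not the authors' pipeline.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Box:
    x1: float
    y1: float
    x2: float
    y2: float


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a.x1, b.x1), max(a.y1, b.y1)
    ix2, iy2 = min(a.x2, b.x2), min(a.y2, b.y2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a.x2 - a.x1) * (a.y2 - a.y1)
    area_b = (b.x2 - b.x1) * (b.y2 - b.y1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def link_frames(per_frame: List[Dict[str, Box]],
                iou_thr: float = 0.3) -> Dict[str, List[Box]]:
    """Greedily link boxes that share a caption phrase across consecutive
    frames, then keep only phrases grounded in a majority of frames so the
    resulting tracks are temporally dense and consistent (illustrative rule)."""
    tracks: Dict[str, List[Box]] = {}
    for frame in per_frame:
        for phrase, box in frame.items():
            if phrase not in tracks:
                tracks[phrase] = [box]
            elif iou(tracks[phrase][-1], box) >= iou_thr:
                tracks[phrase].append(box)
    n = len(per_frame)
    return {p: t for p, t in tracks.items() if len(t) > n // 2}


if __name__ == "__main__":
    # Toy per-frame grounding output: phrase -> box for three frames.
    frames = [
        {"a person": Box(10, 10, 50, 80), "a knife": Box(60, 40, 80, 60)},
        {"a person": Box(12, 11, 52, 82), "a knife": Box(90, 10, 100, 20)},
        {"a person": Box(14, 12, 54, 84)},
    ]
    # "a person" forms a consistent track; the jumping "a knife" box is dropped.
    print(link_frames(frames))
```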