Streaming Dense Video Captioning

April 1, 2024
Authors: Xingyi Zhou, Anurag Arnab, Shyamal Buch, Shen Yan, Austin Myers, Xuehan Xiong, Arsha Nagrani, Cordelia Schmid
cs.AI

Abstract

An ideal model for dense video captioning -- predicting captions localized temporally in a video -- should be able to handle long input videos, predict rich, detailed textual descriptions, and be able to produce outputs before processing the entire video. Current state-of-the-art models, however, process a fixed number of downsampled frames, and make a single full prediction after seeing the whole video. We propose a streaming dense video captioning model that consists of two novel components: First, we propose a new memory module, based on clustering incoming tokens, which can handle arbitrarily long videos as the memory is of a fixed size. Second, we develop a streaming decoding algorithm that enables our model to make predictions before the entire video has been processed. Our model achieves this streaming ability, and significantly improves the state-of-the-art on three dense video captioning benchmarks: ActivityNet, YouCook2 and ViTT. Our code is released at https://github.com/google-research/scenic.
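
The abstract describes two mechanisms: a fixed-size memory maintained by clustering incoming frame tokens, and a decoding procedure that emits captions at intermediate points in the stream. The sketch below shows one way these pieces could fit together. It is a minimal illustration, not the paper's released implementation: the use of plain K-means with the current memory as initial centroids, and the helper names `encode_frame` and `decode_captions`, are assumptions made for clarity (the released code is in the linked scenic repository).

```python
import numpy as np

def kmeans(points, init_centroids, n_iters=5):
    """Plain K-means; returns as many centroids as init_centroids has."""
    centroids = init_centroids.copy()
    for _ in range(n_iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for k in range(len(centroids)):
            members = points[assign == k]
            if len(members) > 0:
                centroids[k] = members.mean(axis=0)
    return centroids

def stream_video(frames, encode_frame, decode_captions,
                 memory_size=128, decode_every=16):
    """Hypothetical streaming loop: a fixed-size clustering memory plus
    intermediate decoding points. Illustration only, not the paper's code."""
    memory = None
    outputs = []
    for t, frame in enumerate(frames):
        tokens = encode_frame(frame)  # (num_tokens, dim) visual tokens per frame
        if memory is None:
            # Bootstrap the memory from the first frame's tokens.
            memory = tokens[:memory_size]
        else:
            # Cluster old memory + new tokens back down to a fixed budget,
            # seeding K-means with the current memory as initial centroids.
            points = np.concatenate([memory, tokens], axis=0)
            memory = kmeans(points, init_centroids=memory)
        # Streaming decoding: predict from the memory accumulated so far,
        # conditioned on earlier outputs, before the whole video is seen.
        if (t + 1) % decode_every == 0 or t == len(frames) - 1:
            outputs.append(decode_captions(memory, context=outputs))
    return outputs
```

Because the memory never grows beyond `memory_size` tokens, per-frame cost stays constant regardless of video length, which is what allows arbitrarily long inputs; decoding at intermediate points is what produces captions before the full video has been processed.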
