

Streaming Dense Video Captioning

April 1, 2024
作者: Xingyi Zhou, Anurag Arnab, Shyamal Buch, Shen Yan, Austin Myers, Xuehan Xiong, Arsha Nagrani, Cordelia Schmid
cs.AI

Abstract

An ideal model for dense video captioning -- predicting captions localized temporally in a video -- should be able to handle long input videos, predict rich, detailed textual descriptions, and be able to produce outputs before processing the entire video. Current state-of-the-art models, however, process a fixed number of downsampled frames, and make a single full prediction after seeing the whole video. We propose a streaming dense video captioning model that consists of two novel components: First, we propose a new memory module, based on clustering incoming tokens, which can handle arbitrarily long videos as the memory is of a fixed size. Second, we develop a streaming decoding algorithm that enables our model to make predictions before the entire video has been processed. Our model achieves this streaming ability, and significantly improves the state-of-the-art on three dense video captioning benchmarks: ActivityNet, YouCook2 and ViTT. Our code is released at https://github.com/google-research/scenic.
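To make the two components concrete, here is a minimal NumPy sketch of a clustering-based fixed-size memory and a streaming decoding loop. The specific weighted K-means update and the names `update_memory`, `encode_frame`, `decode_captions`, `K`, and `stride` are illustrative assumptions rather than the paper's exact implementation; the authors' version is in the released scenic code.

```python
import numpy as np

def update_memory(memory, counts, new_tokens, n_iters=2):
    """Fold newly arrived frame tokens into a fixed-size memory.

    The K memory tokens act as cluster centers; each carries a count of
    how many past tokens it summarizes. New tokens are clustered together
    with the existing centers via a few weighted K-means steps, so the
    memory stays (K, D) no matter how long the video grows.
    """
    points = np.concatenate([memory, new_tokens], axis=0)         # (K+N, D)
    weights = np.concatenate([counts, np.ones(len(new_tokens))])  # (K+N,)
    centers = memory.copy()
    for _ in range(n_iters):
        # Assign every point to its nearest center.
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        # Move each center to the weighted mean of its members.
        for k in range(len(centers)):
            mask = assign == k
            if mask.any():
                w = weights[mask]
                centers[k] = (w[:, None] * points[mask]).sum(0) / w.sum()
    new_counts = np.array([weights[assign == k].sum()
                           for k in range(len(centers))])
    return centers, new_counts

def streaming_captioning(frames, encode_frame, decode_captions, K=64, stride=16):
    """Streaming loop: update the memory frame by frame and decode
    captions at intermediate decoding points instead of waiting for
    the whole video.

    `encode_frame` (frame -> (N, D) tokens) and `decode_captions`
    (memory, prefix -> str) are hypothetical stand-ins for the visual
    backbone and the text decoder; captions emitted so far are fed
    back as a prefix so later predictions stay consistent.
    """
    memory = counts = None
    outputs, context = [], ""
    for t, frame in enumerate(frames, start=1):
        tokens = encode_frame(frame)
        if memory is None:
            # Bootstrap the K centers from the first frame's tokens.
            idx = np.random.choice(len(tokens), size=K, replace=len(tokens) < K)
            memory, counts = tokens[idx].copy(), np.ones(K)
        else:
            memory, counts = update_memory(memory, counts, tokens)
        if t % stride == 0:  # a decoding point
            caption = decode_captions(memory, prefix=context)
            outputs.append((t, caption))
            context += " " + caption
    return outputs
```

Because the memory never grows past K tokens, per-frame cost is constant in video length, and the decoder can be called at any intermediate point, which is what allows outputs before the full video has been seen.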
