ViSMaP: Unsupervised Hour-long Video Summarisation by Meta-Prompting
April 22, 2025
Authors: Jian Hu, Dimitrios Korkinof, Shaogang Gong, Mariano Beguerisse-Diaz
cs.AI
Abstract
We introduce ViSMaP: Unsupervised Video Summarisation by Meta Prompting, a
system to summarise hour-long videos with no supervision. Most existing video
understanding models work well on short videos of pre-segmented events, yet
they struggle to summarise longer videos where relevant events are sparsely
distributed and not pre-segmented. Moreover, long-form video understanding
often relies on supervised hierarchical training that needs extensive
annotations which are costly, slow and prone to inconsistency. With ViSMaP we
bridge the gap between short videos (where annotated data is plentiful) and
long ones (where it's not). We rely on LLMs to create optimised
pseudo-summaries of long videos using segment descriptions from short ones.
These pseudo-summaries are used as training data for a model that generates
long-form video summaries, bypassing the need for expensive annotations of long
videos. Specifically, we adopt a meta-prompting strategy to iteratively
generate and refine pseudo-summaries of long videos. The strategy
leverages short clip descriptions obtained from a supervised short video model
to guide the summary generation. Each iteration uses three LLMs working in sequence: one
to generate the pseudo-summary from clip descriptions, another to evaluate it,
and a third to optimise the prompt of the generator. This iteration is
necessary because the quality of the pseudo-summaries is highly dependent on
the generator prompt, and varies widely among videos. We evaluate our summaries
extensively on multiple datasets; our results show that ViSMaP achieves
performance comparable to fully supervised state-of-the-art models while
generalising across domains without sacrificing performance. Code will be
released upon publication.
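The iterative refinement loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the three LLM calls (generator, evaluator, prompt optimiser) are stubbed with placeholder functions, and all names and the scoring heuristic are assumptions made for the sketch.

```python
# Sketch of the three-LLM meta-prompting loop from the abstract.
# Each stub stands in for a call to a large language model.

def generate_summary(prompt, clip_descriptions):
    # Generator LLM (stub): produce a pseudo-summary of the long video
    # from its short-clip descriptions, guided by the current prompt.
    return prompt + " " + " | ".join(clip_descriptions)

def evaluate_summary(summary, clip_descriptions):
    # Evaluator LLM (stub): score the pseudo-summary; here, the
    # fraction of clip descriptions it covers.
    covered = sum(1 for d in clip_descriptions if d in summary)
    return covered / len(clip_descriptions)

def optimise_prompt(prompt, score):
    # Optimiser LLM (stub): rewrite the generator's prompt based on
    # the evaluator's feedback.
    return prompt + " (refined)"

def meta_prompting(clip_descriptions, iterations=3):
    # Iterate generate -> evaluate -> optimise, keeping the best
    # pseudo-summary seen so far as training data for the long-video model.
    prompt = "Summarise these clips:"
    best_summary, best_score = None, -1.0
    for _ in range(iterations):
        summary = generate_summary(prompt, clip_descriptions)
        score = evaluate_summary(summary, clip_descriptions)
        if score > best_score:
            best_summary, best_score = summary, score
        prompt = optimise_prompt(prompt, score)
    return best_summary
```

The iteration matters because, as the abstract notes, pseudo-summary quality depends heavily on the generator's prompt and varies across videos; the evaluator's feedback lets the optimiser adapt the prompt per video.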