
ARC-Chapter: Structuring Hour-Long Videos into Navigable Chapters and Hierarchical Summaries

November 18, 2025
Authors: Junfu Pu, Teng Wang, Yixiao Ge, Yuying Ge, Chen Li, Ying Shan
cs.AI

Abstract

The proliferation of hour-long videos (e.g., lectures, podcasts, documentaries) has intensified the demand for efficient content structuring. However, existing approaches are constrained by small-scale training data with annotations that are typically short and coarse, restricting generalization to the nuanced transitions found in long videos. We introduce ARC-Chapter, the first large-scale video chaptering model, trained on over a million long-video chapters with bilingual, temporally grounded, and hierarchical chapter annotations. To achieve this, we curated a bilingual English-Chinese chapter dataset via a structured pipeline that unifies ASR transcripts, scene text, and visual captions into multi-level annotations, ranging from short titles to long summaries. We demonstrate clear performance gains from scaling both data volume and annotation density. Moreover, we design a new evaluation metric, GRACE, which incorporates many-to-one segment overlap and semantic similarity to better reflect the flexibility of real-world chaptering. Extensive experiments show that ARC-Chapter establishes a new state of the art by a significant margin, outperforming the previous best method by 14.0% in F1 score and 11.3% in SODA score. ARC-Chapter also exhibits excellent transferability, improving the state of the art on downstream tasks such as dense video captioning on YouCook2.
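The abstract describes GRACE only at a high level: a chaptering metric that allows many-to-one segment overlap and weights it by semantic similarity of the chapter labels. The Python sketch below is a minimal illustration of that idea under stated assumptions, not the paper's actual definition; the names (`Chapter`, `grace_like_score`, `token_f1`) are hypothetical, and the token-overlap similarity is a stand-in for whatever embedding-based similarity the paper uses.

```python
# Illustrative sketch only: combines many-to-one temporal overlap with a
# title-similarity weight, in the spirit of GRACE. Not the paper's formula.
from dataclasses import dataclass


@dataclass
class Chapter:
    start: float  # seconds
    end: float    # seconds
    title: str


def overlap(a: Chapter, b: Chapter) -> float:
    """Length of the temporal intersection of two chapters, in seconds."""
    return max(0.0, min(a.end, b.end) - max(a.start, b.start))


def token_f1(a: str, b: str) -> float:
    """Toy stand-in for semantic similarity (a real system would use
    sentence embeddings): F1 over lowercased word overlap."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    inter = len(ta & tb)
    if inter == 0:
        return 0.0
    p, r = inter / len(tb), inter / len(ta)
    return 2 * p * r / (p + r)


def grace_like_score(preds: list[Chapter], refs: list[Chapter]) -> float:
    """For each reference chapter, let every overlapping prediction count
    toward it (many-to-one), weight the covered fraction by title
    similarity, and average over the reference chapters."""
    total = 0.0
    for ref in refs:
        dur = max(ref.end - ref.start, 1e-6)
        covered = sum(
            (overlap(p, ref) / dur) * token_f1(p.title, ref.title)
            for p in preds
            if overlap(p, ref) > 0
        )
        total += min(covered, 1.0)  # cap so over-segmentation cannot exceed 1
    return total / len(refs) if refs else 0.0


if __name__ == "__main__":
    refs = [Chapter(0, 300, "Introduction to the course"),
            Chapter(300, 900, "Gradient descent basics")]
    preds = [Chapter(0, 280, "Course introduction"),
             Chapter(280, 620, "Gradient descent"),
             Chapter(620, 900, "Gradient descent examples")]
    print(f"GRACE-like score: {grace_like_score(preds, refs):.3f}")
```

Unlike a strict one-to-one F1 over boundaries, this kind of scoring does not penalize a prediction merely for splitting one reference chapter into several well-labeled pieces, which is the flexibility the abstract attributes to GRACE.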