TimeChat-Online: 80% Visual Tokens are Naturally Redundant in Streaming Videos
April 24, 2025
Authors: Linli Yao, Yicheng Li, Yuancheng Wei, Lei Li, Shuhuai Ren, Yuanxin Liu, Kun Ouyang, Lean Wang, Shicheng Li, Sida Li, Lingpeng Kong, Qi Liu, Yuanxing Zhang, Xu Sun
cs.AI
Abstract
The rapid growth of online video platforms, particularly live streaming
services, has created an urgent need for real-time video understanding systems.
These systems must process continuous video streams and respond to user queries
instantaneously, presenting unique challenges for current Video Large Language
Models (VideoLLMs). While existing VideoLLMs excel at processing complete
videos, they face significant limitations in streaming scenarios due to their
inability to handle dense, redundant frames efficiently. We introduce
TimeChat-Online, a novel online VideoLLM that revolutionizes real-time video
interaction. At its core lies our innovative Differential Token Drop (DTD)
module, which addresses the fundamental challenge of visual redundancy in
streaming videos. Drawing inspiration from human visual perception's Change
Blindness phenomenon, DTD preserves meaningful temporal changes while filtering
out static, redundant content between frames. Remarkably, our experiments
demonstrate that DTD achieves an 82.8% reduction in video tokens while
maintaining 98% performance on StreamingBench, revealing that over 80% of
visual content in streaming videos is naturally redundant without requiring
language guidance. To enable seamless real-time interaction, we present
TimeChat-Online-139K, a comprehensive streaming video dataset featuring diverse
interaction patterns including backward-tracing, current-perception, and
future-responding scenarios. TimeChat-Online's unique Proactive Response
capability, naturally achieved through continuous monitoring of video scene
transitions via DTD, sets it apart from conventional approaches. Our extensive
evaluation demonstrates TimeChat-Online's superior performance on streaming
benchmarks (StreamingBench and OvOBench) while maintaining competitive results
on long-form video tasks such as Video-MME and MLVU.
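The core idea behind DTD — keeping only the visual tokens that change between consecutive frames — can be illustrated with a minimal sketch. The function below is a hypothetical simplification, not the paper's implementation: it assumes frames are already encoded as grids of patch embeddings and uses a simple L2-distance threshold to decide which patches are "changed" and therefore kept.

```python
import numpy as np

def differential_token_drop(frames, threshold=0.1):
    """Hypothetical sketch of patch-level token dropping between frames.

    frames: array of shape (T, H_patches, W_patches, D) -- per-frame patch
    embeddings. Returns one boolean mask per frame marking kept patches.
    """
    # The first frame has no predecessor, so all of its tokens are kept.
    keep_masks = [np.ones(frames[0].shape[:2], dtype=bool)]
    for prev, curr in zip(frames[:-1], frames[1:]):
        # L2 distance between corresponding patch embeddings of
        # consecutive frames: large distance means visual change.
        diff = np.linalg.norm(curr - prev, axis=-1)
        # Keep only patches whose change exceeds a fraction of the
        # frame's maximum change; static patches are dropped.
        keep_masks.append(diff > threshold * diff.max())
    return keep_masks

# Two frames of 2x2 patches with 4-dim embeddings; only one patch changes.
frames = np.zeros((2, 2, 2, 4))
frames[1, 0, 0] = 1.0
masks = differential_token_drop(frames)
kept = sum(int(m.sum()) for m in masks)   # 4 tokens (frame 0) + 1 changed token
```

In this toy example, 5 of 8 tokens survive; on real streaming video, where most patches are static from frame to frame, a mechanism of this kind can discard the large majority of tokens, consistent with the ~80% redundancy the abstract reports.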