Deforming Videos to Masks: Flow Matching for Referring Video Segmentation
October 7, 2025
Authors: Zanyi Wang, Dengyang Jiang, Liuzhuozheng Li, Sizhe Dang, Chengzu Li, Harry Yang, Guang Dai, Mengmeng Wang, Jingdong Wang
cs.AI
Abstract
Referring Video Object Segmentation (RVOS) requires segmenting specific objects in a video guided by a natural language description. The core challenge of RVOS is to anchor abstract linguistic concepts onto a specific set of pixels and to segment them consistently through the complex dynamics of a video. Faced with this difficulty, prior work has often decomposed the task into a pragmatic "locate-then-segment" pipeline. However, this cascaded design creates an information bottleneck by reducing semantics to coarse geometric prompts (e.g., points), and it struggles to maintain temporal consistency because the segmentation process is often decoupled from the initial language grounding. To overcome these fundamental limitations, we propose FlowRVS, a novel framework that reconceptualizes RVOS as a conditional continuous flow problem. This allows us to harness the inherent strengths of pretrained text-to-video (T2V) models: fine-grained pixel control, text-video semantic alignment, and temporal coherence. Instead of conventionally generating a mask from noise or predicting it directly, we reformulate the task as learning a direct, language-guided deformation from a video's holistic representation to its target mask. Our one-stage, generative approach achieves new state-of-the-art results across all major RVOS benchmarks. Specifically, it achieves a J&F of 51.1 on MeViS (+1.6 over the prior SOTA) and 73.3 on zero-shot Ref-DAVIS17 (+2.7), demonstrating the significant potential of modeling video understanding tasks as continuous deformation processes.
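
To make the reformulation concrete, below is a minimal sketch of a conditional flow-matching training loss in which the source of the flow is the video's latent representation rather than Gaussian noise, as the abstract describes. The names (velocity_model, video_latent, mask_latent, text_emb) and the linear interpolation path are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def flow_matching_loss(velocity_model, video_latent, mask_latent, text_emb):
    """Conditional flow matching from a video's latent representation (x0)
    to its target mask latent (x1), guided by a text embedding.
    A minimal sketch: the names and the straight-line probability path
    are illustrative assumptions, not the paper's exact formulation."""
    b = video_latent.shape[0]
    # Sample a time t uniformly in [0, 1] for each example in the batch.
    t = torch.rand(b, device=video_latent.device)
    t_ = t.view(b, *([1] * (video_latent.dim() - 1)))  # broadcastable shape
    # Point on the straight-line path from video latent to mask latent.
    x_t = (1.0 - t_) * video_latent + t_ * mask_latent
    # For a linear path, the target velocity is constant: x1 - x0.
    target_velocity = mask_latent - video_latent
    # The network predicts the velocity field, conditioned on language.
    pred_velocity = velocity_model(x_t, t, text_emb)
    return torch.mean((pred_velocity - target_velocity) ** 2)
```

At inference time, one would integrate the learned velocity field from t = 0 (the video latent) to t = 1 with a few ODE-solver steps (e.g., Euler), deforming the video representation into the predicted mask.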