

DA-Flow: Degradation-Aware Optical Flow Estimation with Diffusion Models

March 24, 2026
作者: Jaewon Min, Jaeeun Lee, Yeji Choi, Paul Hyunbin Cho, Jin Hyeon Kim, Tae-Young Lee, Jongsik Ahn, Hwayeong Lee, Seonghyun Park, Seungryong Kim
cs.AI

Abstract

Optical flow models trained on high-quality data often degrade severely when confronted with real-world corruptions such as blur, noise, and compression artifacts. To overcome this limitation, we formulate Degradation-Aware Optical Flow, a new task targeting accurate dense correspondence estimation from real-world corrupted videos. Our key insight is that the intermediate representations of image restoration diffusion models are inherently corruption-aware but lack temporal awareness. To address this limitation, we lift the model to attend across adjacent frames via full spatio-temporal attention, and empirically demonstrate that the resulting features exhibit zero-shot correspondence capabilities. Based on this finding, we present DA-Flow, a hybrid architecture that fuses these diffusion features with convolutional features within an iterative refinement framework. DA-Flow substantially outperforms existing optical flow methods under severe degradation across multiple benchmarks.
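The abstract's "lifting" step, extending a restoration diffusion model's per-frame (spatial-only) attention to full spatio-temporal attention, amounts to folding the frame axis into the token axis so that every token can attend to tokens in adjacent frames. The following is a minimal NumPy sketch of that idea only; the function name, shapes, and the simplified single-head attention (no learned Q/K/V projections) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatiotemporal_attention(frames):
    """Self-attention with the frame axis folded into the token axis.

    frames: (T, N, C) — per-frame token grids from a diffusion backbone.
    A spatial-only layer would attend within each (N, C) slice separately;
    flattening T into the token axis lets every token attend across
    adjacent frames as well, which is the "lifting" the abstract describes.
    (Simplified: no learned projections, single head.)
    """
    T, N, C = frames.shape
    tokens = frames.reshape(T * N, C)            # (T*N, C): all frames, one sequence
    scores = tokens @ tokens.T / np.sqrt(C)      # all-pairs scores, across frames too
    out = softmax(scores) @ tokens               # attention-weighted mixing
    return out.reshape(T, N, C)                  # back to per-frame layout

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 16, 64))             # 2 frames, 16 tokens, 64 channels
y = spatiotemporal_attention(x)
print(y.shape)                                   # (2, 16, 64)
```

Because the output keeps the per-frame layout, such a lifted layer can be dropped in place of the original spatial attention, leaving the rest of the restoration model unchanged.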