

UniGeo: Taming Video Diffusion for Unified Consistent Geometry Estimation

May 30, 2025
Authors: Yang-Tian Sun, Xin Yu, Zehuan Huang, Yi-Hua Huang, Yuan-Chen Guo, Ziyi Yang, Yan-Pei Cao, Xiaojuan Qi
cs.AI

Abstract

Recently, methods leveraging diffusion model priors to assist monocular geometric estimation (e.g., depth and normal) have gained significant attention due to their strong generalization ability. However, most existing works focus on estimating geometric properties within the camera coordinate system of individual video frames, neglecting the inherent ability of diffusion models to determine inter-frame correspondence. In this work, we demonstrate that, through appropriate design and fine-tuning, the intrinsic consistency of video generation models can be effectively harnessed for consistent geometric estimation. Specifically, we 1) select geometric attributes in the global coordinate system that share the same correspondence with video frames as the prediction targets, 2) introduce a novel and efficient conditioning method by reusing positional encodings, and 3) enhance performance through joint training on multiple geometric attributes that share the same correspondence. Our results achieve superior performance in predicting global geometric attributes in videos and can be directly applied to reconstruction tasks. Even when trained solely on static video data, our approach exhibits the potential to generalize to dynamic video scenes.
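The abstract mentions an efficient conditioning method that reuses positional encodings, but gives no implementation details. Below is a minimal, hypothetical sketch (not the authors' code) of one way such conditioning could look: the noisy geometry tokens and the clean video-frame tokens share a single positional-encoding table, so cross-attention can align the two streams frame-by-frame and patch-by-patch. The class name, shapes, and attention layout are illustrative assumptions only.

```python
# Hypothetical sketch (not the authors' implementation): condition a geometry
# prediction stream on video-frame tokens by REUSING the same positional
# encodings for both token streams.
import torch
import torch.nn as nn


class SharedPosEncConditioning(nn.Module):
    def __init__(self, num_tokens: int, dim: int, num_heads: int = 8):
        super().__init__()
        # One positional table shared by both streams (the idea being
        # illustrated; the paper's actual encoding scheme may differ).
        self.pos = nn.Parameter(torch.randn(1, num_tokens, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, geo_tokens: torch.Tensor, rgb_tokens: torch.Tensor):
        # geo_tokens: noisy latent tokens of the geometry map, shape (B, N, D)
        # rgb_tokens: latent tokens of the conditioning video frames, (B, N, D)
        q = self.norm(geo_tokens + self.pos)  # geometry queries
        kv = rgb_tokens + self.pos            # conditioning keys/values reuse the SAME encodings
        out, _ = self.attn(q, kv, kv)
        return geo_tokens + out               # residual update of the geometry stream


# Toy usage: e.g. 2 frames x 16 patches = 32 tokens, 64-dim latents.
block = SharedPosEncConditioning(num_tokens=32, dim=64)
geo = torch.randn(1, 32, 64)
rgb = torch.randn(1, 32, 64)
print(block(geo, rgb).shape)  # torch.Size([1, 32, 64])
```

Because the two streams index into the same positional table, no extra conditioning parameters or separate encoders are introduced, which is consistent with the abstract's claim of an efficient conditioning mechanism; the exact architecture, however, is only described at a high level in the abstract.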
