VMem: Consistent Interactive Video Scene Generation with Surfel-Indexed View Memory
June 23, 2025
Authors: Runjia Li, Philip Torr, Andrea Vedaldi, Tomas Jakab
cs.AI
Abstract
We propose a novel memory mechanism to build video generators that can
explore environments interactively. Similar results have previously been
achieved by out-painting 2D views of the scene while incrementally
reconstructing its 3D geometry, which quickly accumulates errors, or by video
generators with a short context window, which struggle to maintain scene
coherence over the long term. To address these limitations, we introduce
Surfel-Indexed View Memory (VMem), a mechanism that remembers past views by
indexing them geometrically based on the 3D surface elements (surfels) they
have observed. VMem enables the efficient retrieval of the most relevant past
views when generating new ones. By focusing only on these relevant views, our
method produces consistent explorations of imagined environments at a fraction
of the computational cost of using all past views as context. We evaluate our
approach on challenging long-term scene synthesis benchmarks and demonstrate
superior performance compared to existing methods in maintaining scene
coherence and camera control.
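
The core idea, indexing past views by the surfels they observed and retrieving the most relevant ones for a new camera pose, can be illustrated with a short sketch. The Python below is purely illustrative: the Surfel and ViewMemory classes, the add_view and retrieve interfaces, the coarse front-facing/in-front visibility test, and the vote-counting retrieval are assumptions made for this example, not the paper's actual implementation; in the paper, surfels would come from estimated geometry of previously generated frames.

# A minimal, illustrative sketch of a surfel-indexed view memory.
# All names and the visibility heuristic are assumptions for this example.
from collections import Counter
from dataclasses import dataclass, field

import numpy as np


@dataclass
class Surfel:
    position: np.ndarray          # (3,) world-space center of the surface element
    normal: np.ndarray            # (3,) unit surface normal
    view_ids: set = field(default_factory=set)  # past views that observed this surfel


class ViewMemory:
    """Index past views by the surfels they observed; retrieve views by relevance."""

    def __init__(self):
        self.surfels: list[Surfel] = []

    def add_view(self, view_id: int, observed_surfels: list[Surfel]):
        # Record that `view_id` observed each surfel (merging nearby surfels
        # into a single element is omitted here for brevity).
        for s in observed_surfels:
            s.view_ids.add(view_id)
            self.surfels.append(s)

    def retrieve(self, cam_pos: np.ndarray, cam_dir: np.ndarray, k: int = 4):
        # Let surfels visible from the new camera pose vote for the past
        # views that observed them: a surfel counts as visible if it faces
        # the camera and lies in front of it (a deliberately coarse test).
        votes = Counter()
        for s in self.surfels:
            to_surfel = s.position - cam_pos
            facing = np.dot(s.normal, -to_surfel) > 0    # surfel faces the camera
            in_front = np.dot(cam_dir, to_surfel) > 0    # roughly inside the frustum
            if facing and in_front:
                votes.update(s.view_ids)
        # The k most-voted past views become the context for generating the new view.
        return [vid for vid, _ in votes.most_common(k)]

Retrieval here scales with the number of stored surfels rather than the number of past views, which conveys why conditioning only on the retrieved top-k views can be far cheaper than using the full view history as context.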