Styl3R: Instant 3D Stylized Reconstruction for Arbitrary Scenes and Styles
May 27, 2025
Authors: Peng Wang, Xiang Liu, Peidong Liu
cs.AI
Abstract
Stylizing 3D scenes instantly while maintaining multi-view consistency and
faithfully resembling a style image remains a significant challenge. Current
state-of-the-art 3D stylization methods typically involve computationally
intensive test-time optimization to transfer artistic features into a
pretrained 3D representation, often requiring dense posed input images. In
contrast, leveraging recent advances in feed-forward reconstruction models, we
demonstrate a novel approach that achieves direct 3D stylization in under a
second from unposed sparse-view scene images and an arbitrary style image. To
decouple reconstruction from stylization, we introduce a branched architecture
that separates structure modeling from appearance shading, effectively
preventing style transfer from distorting the underlying 3D scene structure.
Furthermore, we adapt an identity loss to facilitate pre-training our
stylization model through the novel view synthesis task; this strategy also
allows the model to retain its original reconstruction capability while being
fine-tuned for stylization. Comprehensive evaluations on both in-domain and
out-of-domain datasets demonstrate that our approach
produces high-quality stylized 3D content that achieves a superior blend of
style and scene appearance, while also outperforming existing methods in
multi-view consistency and efficiency.
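To make the branched design concrete, below is a minimal PyTorch sketch of how structure and appearance could be kept in separate branches. All module names, feature sizes, and the per-pixel Gaussian parameterization (11 geometry channels: mean offset, log-scale, rotation quaternion, opacity) are illustrative assumptions, not the authors' implementation; the key property the sketch demonstrates is that the structure head never sees the style code, so the style image can only influence colors.

```python
import torch
import torch.nn as nn

class BranchedStylizer(nn.Module):
    """Hypothetical sketch: structure and appearance as separate branches."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Shared encoder for unposed sparse-view scene images (a conv stem
        # standing in for whatever backbone the real model uses).
        self.scene_encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # Style encoder that pools the style image into a global style code.
        self.style_encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Structure branch: per-pixel Gaussian geometry (3 mean offsets,
        # 3 log-scales, 4 rotation quaternion, 1 opacity = 11 channels).
        # It consumes scene features only -- never the style code.
        self.structure_head = nn.Conv2d(feat_dim, 11, 1)
        # Appearance branch: per-pixel RGB from style-modulated features.
        self.style_mod = nn.Linear(feat_dim, feat_dim)
        self.appearance_head = nn.Conv2d(feat_dim, 3, 1)

    def forward(self, scene_views: torch.Tensor, style_img: torch.Tensor):
        # scene_views: (B, 3, H, W); style_img: (B, 3, Hs, Ws).
        # Multi-view batching is elided for brevity.
        f = self.scene_encoder(scene_views)
        geometry = self.structure_head(f)             # style-independent
        s = self.style_encoder(style_img).flatten(1)  # (B, feat_dim)
        gate = torch.sigmoid(self.style_mod(s))[:, :, None, None]
        color = self.appearance_head(f * gate)        # style-dependent
        return geometry, color

# Example: two 64x64 scene views stylized by a 128x128 style image.
model = BranchedStylizer()
geometry, color = model(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 128, 128))
```

Because geometry is predicted from scene features alone, swapping the style image can only change the appearance branch's output, which is what keeps stylization from warping the reconstructed 3D structure.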
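The identity-loss pre-training strategy can be sketched the same way: feed one of the scene's own views as the style input, so the "stylized" output should simply reproduce the scene, reducing pre-training to ordinary novel view synthesis. The hypothetical training step below reuses the BranchedStylizer sketch above; actual rendering of the predicted 3D representation into the target viewpoint is elided and stood in for by a direct photometric comparison.

```python
import torch
import torch.nn.functional as F

def identity_pretrain_step(model: BranchedStylizer,
                           context_view: torch.Tensor,
                           target_view: torch.Tensor) -> torch.Tensor:
    # Identity setting: the style input IS a scene view, so the model is
    # asked to reproduce the scene's own appearance (novel view synthesis).
    geometry, color = model(context_view, context_view)
    # Rendering `geometry` into the target viewpoint is elided; a direct
    # photometric comparison stands in for the rendered novel view.
    return F.mse_loss(color, target_view)

loss = identity_pretrain_step(model, torch.rand(2, 3, 64, 64),
                              torch.rand(2, 3, 64, 64))
loss.backward()
```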