SF3D: Stable Fast 3D Mesh Reconstruction with UV-unwrapping and Illumination Disentanglement
August 1, 2024
Authors: Mark Boss, Zixuan Huang, Aaryaman Vasishta, Varun Jampani
cs.AI
Abstract
We present SF3D, a novel method for rapid and high-quality textured object
mesh reconstruction from a single image in just 0.5 seconds. Unlike most
existing approaches, SF3D is explicitly trained for mesh generation,
incorporating a fast UV unwrapping technique that enables swift texture
generation rather than relying on vertex colors. The method also learns to
predict material parameters and normal maps to enhance the visual quality of
the reconstructed 3D meshes. Furthermore, SF3D integrates a delighting step to
effectively remove low-frequency illumination effects, ensuring that the
reconstructed meshes can be easily used in novel illumination conditions.
Experiments demonstrate the superior performance of SF3D over the existing
techniques. Project page: https://stable-fast-3d.github.io
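To make the delighting idea above concrete: low-frequency illumination is commonly represented with low-order spherical harmonics (SH), so one simple way to remove it is to fit an order-2 SH shading field to the observed colors over the surface normals and divide it out. The sketch below illustrates only that general concept; SF3D's actual delighting step is learned, and every name here (`sh_basis`, `delight`) is hypothetical rather than part of SF3D's code.

```python
# Illustrative sketch only: removing low-frequency illumination by fitting an
# order-2 spherical-harmonics (SH) shading model and dividing it out.
# This is NOT SF3D's implementation; all names are hypothetical.
import numpy as np

def sh_basis(normals: np.ndarray) -> np.ndarray:
    """Order-2 real SH basis (9 terms) evaluated at unit normals; shape (N, 9)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        np.full_like(x, 0.282095),                      # l = 0
        0.488603 * y, 0.488603 * z, 0.488603 * x,       # l = 1
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),   # l = 2
    ], axis=1)

def delight(rgb: np.ndarray, normals: np.ndarray, eps: float = 1e-4) -> np.ndarray:
    """Fit per-channel SH shading by least squares, then divide it out to get
    an approximately illumination-free albedo. rgb, normals: (N, 3) arrays."""
    basis = sh_basis(normals)                                # (N, 9)
    coeffs, *_ = np.linalg.lstsq(basis, rgb, rcond=None)     # (9, 3)
    shading = np.clip(basis @ coeffs, eps, None)             # low-frequency shading
    return np.clip(rgb / shading, 0.0, 1.0)                  # crude "delit" albedo

if __name__ == "__main__":
    # Toy check: constant albedo lit by a smooth directional light.
    n = np.random.randn(1024, 3)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    albedo = np.full((1024, 3), 0.6)
    shading = np.clip(0.5 + 0.5 * (n @ np.array([0.0, 0.0, 1.0])), 0.05, None)[:, None]
    recovered = delight(albedo * shading, n)
    print("mean abs error vs. true albedo:", np.abs(recovered - albedo).mean())
```

A baked-in shading field like this is exactly what makes a reconstructed texture hard to relight; once the low-frequency component is factored out, the mesh can be placed under new lighting without the original illumination showing through.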