GRM: Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation
March 21, 2024
Authors: Yinghao Xu, Zifan Shi, Wang Yifan, Hansheng Chen, Ceyuan Yang, Sida Peng, Yujun Shen, Gordon Wetzstein
cs.AI
Abstract
We introduce GRM, a large-scale reconstructor capable of recovering a 3D
asset from sparse-view images in around 0.1s. GRM is a feed-forward
transformer-based model that efficiently incorporates multi-view information to
translate the input pixels into pixel-aligned Gaussians, which are unprojected
to create a set of densely distributed 3D Gaussians representing a scene.
Together, our transformer architecture and the use of 3D Gaussians unlock a
scalable and efficient reconstruction framework. Extensive experimental results
demonstrate the superiority of our method over alternatives in both
reconstruction quality and efficiency. We also showcase the potential of GRM in
generative tasks, i.e., text-to-3D and image-to-3D, by integrating it with
existing multi-view diffusion models. Our project website is at:
https://justimyhxu.github.io/projects/grm/.
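To make the abstract's core mechanism concrete, below is a minimal sketch (not the authors' code) of the unprojection step it describes: per-pixel Gaussian attributes predicted by the transformer are lifted into 3D along each pixel's camera ray. The function name, tensor shapes, and the attribute split are illustrative assumptions, not GRM's actual interface.

```python
# Hypothetical sketch of lifting pixel-aligned Gaussians to 3D,
# assuming the network predicts a depth plus remaining attributes
# (opacity, scale, rotation, color) for every input pixel.
import torch

def unproject_pixel_gaussians(depth, ray_origins, ray_dirs, features):
    """Lift pixel-aligned Gaussian parameters into world space.

    depth:       (V, H, W)     per-pixel depth predicted by the network
    ray_origins: (V, H, W, 3)  camera centers broadcast per pixel
    ray_dirs:    (V, H, W, 3)  unit ray directions per pixel
    features:    (V, H, W, C)  remaining Gaussian attributes
    Returns one flattened Gaussian per input pixel.
    """
    # Each Gaussian center lies on its pixel's ray at the predicted depth.
    centers = ray_origins + depth.unsqueeze(-1) * ray_dirs  # (V, H, W, 3)
    return {
        "centers": centers.reshape(-1, 3),
        "features": features.reshape(-1, features.shape[-1]),
    }

# Toy usage: 4 sparse input views at 64x64 yield 4*64*64 = 16,384 Gaussians,
# illustrating how a dense set of 3D Gaussians arises from input pixels.
V, H, W, C = 4, 64, 64, 11  # C = 1 opacity + 3 scale + 4 rotation + 3 color
gaussians = unproject_pixel_gaussians(
    torch.rand(V, H, W),
    torch.zeros(V, H, W, 3),
    torch.nn.functional.normalize(torch.randn(V, H, W, 3), dim=-1),
    torch.rand(V, H, W, C),
)
print(gaussians["centers"].shape)  # torch.Size([16384, 3])
```

Because the mapping is a fixed geometric transform with one Gaussian per pixel, the reconstruction stays feed-forward, which is consistent with the speed and scalability claims in the abstract.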