LATTE3D: Large-scale Amortized Text-To-Enhanced3D Synthesis
March 22, 2024
Authors: Kevin Xie, Jonathan Lorraine, Tianshi Cao, Jun Gao, James Lucas, Antonio Torralba, Sanja Fidler, Xiaohui Zeng
cs.AI
Abstract
Recent text-to-3D generation approaches produce impressive 3D results but
require time-consuming optimization that can take up to an hour per prompt.
Amortized methods like ATT3D optimize multiple prompts simultaneously to
improve efficiency, enabling fast text-to-3D synthesis. However, they cannot
capture high-frequency geometry and texture details and struggle to scale to
large prompt sets, so they generalize poorly. We introduce LATTE3D, addressing
these limitations to achieve fast, high-quality generation on a significantly
larger prompt set. Key to our method is 1) building a scalable architecture and
2) leveraging 3D data during optimization through 3D-aware diffusion priors,
shape regularization, and model initialization to achieve robustness to diverse
and complex training prompts. LATTE3D amortizes both neural field and textured
surface generation to produce highly detailed textured meshes in a single
forward pass. LATTE3D generates 3D objects in 400ms, and can be further
enhanced with fast test-time optimization.
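To make the amortization idea concrete: instead of running a separate optimization per prompt, a single network is trained so that its weights are shared across many prompts, and generation reduces to one forward pass on a prompt embedding. The toy sketch below illustrates this with a linear model and a mean-squared-error stand-in for the rendering loss; all names, shapes, and the loss itself are illustrative assumptions, not the paper's actual architecture or objective.

```python
import numpy as np

# Toy sketch of amortized optimization (the core idea behind ATT3D/LATTE3D):
# one set of shared weights W is optimized jointly over a whole prompt set,
# so inference for any prompt is a single forward pass rather than a fresh
# per-prompt optimization. Everything here is a hypothetical stand-in.

rng = np.random.default_rng(0)

n_prompts, embed_dim, out_dim = 8, 4, 6
prompt_embeddings = rng.normal(size=(n_prompts, embed_dim))
# Stand-in per-prompt targets play the role of the rendering-loss signal.
targets = rng.normal(size=(n_prompts, out_dim))

W = np.zeros((embed_dim, out_dim))  # shared weights, amortized over prompts

def forward(W, e):
    # One forward pass maps a prompt embedding to "scene parameters".
    return e @ W

lr = 0.05
losses = []
for step in range(200):
    pred = forward(W, prompt_embeddings)          # all prompts at once
    err = pred - targets
    losses.append(float((err ** 2).mean()))
    grad = prompt_embeddings.T @ err / n_prompts  # gradient of the MSE
    W -= lr * grad                                # single shared update

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

After this joint training loop, `forward(W, new_embedding)` costs one matrix product per prompt, which is why amortized methods can answer in milliseconds; the paper's fast test-time optimization corresponds to running a few extra per-prompt refinement steps on top of such a shared model.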