AToM: Amortized Text-to-Mesh using 2D Diffusion

February 1, 2024
作者: Guocheng Qian, Junli Cao, Aliaksandr Siarohin, Yash Kant, Chaoyang Wang, Michael Vasilkovsky, Hsin-Ying Lee, Yuwei Fang, Ivan Skorokhodov, Peiye Zhuang, Igor Gilitschenski, Jian Ren, Bernard Ghanem, Kfir Aberman, Sergey Tulyakov
cs.AI

Abstract

We introduce Amortized Text-to-Mesh (AToM), a feed-forward text-to-mesh framework optimized across multiple text prompts simultaneously. In contrast to existing text-to-3D methods, which often entail time-consuming per-prompt optimization and commonly output representations other than polygonal meshes, AToM directly generates high-quality textured meshes in less than 1 second with around a 10-fold reduction in training cost, and generalizes to unseen prompts. Our key idea is a novel triplane-based text-to-mesh architecture with a two-stage amortized optimization strategy that ensures stable training and enables scalability. Through extensive experiments on various prompt benchmarks, AToM significantly outperforms state-of-the-art amortized approaches, with over 4 times higher accuracy on the DF415 dataset, and produces more distinguishable and higher-quality 3D outputs. AToM demonstrates strong generalizability, offering fine-grained 3D assets for unseen interpolated prompts without further optimization during inference, unlike per-prompt solutions.
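To make the amortized idea concrete, below is a minimal, self-contained PyTorch sketch, not the authors' code: a single text-conditioned network maps a prompt embedding to triplane features, a small MLP decodes queried points into signed distance and color, and one optimizer is shared across all prompts, so a new prompt needs only a forward pass at inference. All module names, dimensions, and the placeholder loss are illustrative assumptions; AToM's actual two-stage training, diffusion-based supervision, and mesh extraction are omitted.

```python
# Illustrative sketch of amortized, triplane-based text-to-3D (NOT AToM's code).
# One shared network is optimized across MANY prompts, so inference for a new
# prompt is a single feed-forward pass instead of per-prompt optimization.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextToTriplane(nn.Module):
    """Maps a text embedding to three axis-aligned feature planes (a triplane)."""
    def __init__(self, text_dim=512, plane_ch=32, plane_res=64):
        super().__init__()
        self.plane_ch, self.plane_res = plane_ch, plane_res
        self.net = nn.Sequential(
            nn.Linear(text_dim, 1024), nn.SiLU(),
            nn.Linear(1024, 3 * plane_ch * plane_res * plane_res),
        )

    def forward(self, text_emb):  # text_emb: (B, text_dim)
        planes = self.net(text_emb)
        # (B, 3 planes, C, H, W): XY, XZ, YZ feature planes
        return planes.view(-1, 3, self.plane_ch, self.plane_res, self.plane_res)

class TriplaneDecoder(nn.Module):
    """Decodes sampled triplane features into (sdf, r, g, b) per query point."""
    def __init__(self, plane_ch=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * plane_ch, 128), nn.SiLU(),
            nn.Linear(128, 4),  # signed distance + RGB color
        )

    def forward(self, planes, xyz):  # xyz in [-1, 1]^3, shape (B, N, 3)
        B, N, _ = xyz.shape
        feats = []
        # Project each 3D point onto the three planes and bilinearly sample.
        for i, dims in enumerate([[0, 1], [0, 2], [1, 2]]):
            uv = xyz[..., dims].view(B, N, 1, 2)       # grid_sample wants (B,H,W,2)
            f = F.grid_sample(planes[:, i], uv, align_corners=True)  # (B, C, N, 1)
            feats.append(f.squeeze(-1).transpose(1, 2))               # (B, N, C)
        return self.mlp(torch.cat(feats, dim=-1))       # (B, N, 4)

# Amortized training loop: one optimizer, a batch of different prompts per step.
encoder, decoder = TextToTriplane(), TriplaneDecoder()
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-4)
for step in range(100):
    text_emb = torch.randn(8, 512)          # stand-in for real text-encoder embeddings
    xyz = torch.rand(8, 4096, 3) * 2 - 1    # random 3D query points
    out = decoder(encoder(text_emb), xyz)
    loss = out.pow(2).mean()                # placeholder; AToM uses 2D-diffusion guidance
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the sketch is the training structure: because the encoder and decoder are shared across prompts, the per-prompt cost of classic text-to-3D optimization is replaced by a single sub-second forward pass, which is what enables generalization to unseen and interpolated prompts.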