AToM: Amortized Text-to-Mesh using 2D Diffusion
February 1, 2024
Authors: Guocheng Qian, Junli Cao, Aliaksandr Siarohin, Yash Kant, Chaoyang Wang, Michael Vasilkovsky, Hsin-Ying Lee, Yuwei Fang, Ivan Skorokhodov, Peiye Zhuang, Igor Gilitschenski, Jian Ren, Bernard Ghanem, Kfir Aberman, Sergey Tulyakov
cs.AI
Abstract
We introduce Amortized Text-to-Mesh (AToM), a feed-forward text-to-mesh
framework optimized across multiple text prompts simultaneously. In contrast to
existing text-to-3D methods that often entail time-consuming per-prompt
optimization and commonly output representations other than polygonal meshes,
AToM directly generates high-quality textured meshes in less than 1 second, with
around a 10-fold reduction in training cost, and generalizes to unseen
prompts. Our key idea is a novel triplane-based text-to-mesh architecture with
a two-stage amortized optimization strategy that ensures stable training and
enables scalability. Through extensive experiments on various prompt
benchmarks, AToM significantly outperforms state-of-the-art amortized
approaches with over 4 times higher accuracy (on the DF415 dataset) and produces
more distinguishable and higher-quality 3D outputs. AToM demonstrates strong
generalizability, offering fine-grained 3D assets for unseen interpolated
prompts without further optimization during inference, unlike per-prompt
solutions.
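
To make the high-level idea concrete, below is a minimal, hypothetical sketch (not the authors' released code) of a triplane-based feed-forward text-to-3D module in PyTorch: a text embedding is mapped to three axis-aligned feature planes, queried 3D points bilinearly sample those planes, and a small MLP decodes the aggregated features into a signed distance and color, from which a textured mesh could be extracted. All module names, dimensions, and the plane-aggregation choice are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriplaneTextTo3D(nn.Module):
    """Hypothetical triplane text-to-3D sketch; names/dims are assumptions."""
    def __init__(self, text_dim=512, plane_res=64, plane_ch=32):
        super().__init__()
        self.plane_res, self.plane_ch = plane_res, plane_ch
        # Map the text embedding to three axis-aligned feature planes (XY, XZ, YZ).
        self.to_planes = nn.Linear(text_dim, 3 * plane_ch * plane_res * plane_res)
        # A small MLP decodes sampled triplane features into SDF + RGB.
        self.decoder = nn.Sequential(
            nn.Linear(plane_ch, 64), nn.SiLU(),
            nn.Linear(64, 1 + 3),  # 1 signed-distance channel + 3 color channels
        )

    def forward(self, text_emb, points):
        # text_emb: (B, text_dim); points: (B, N, 3), coordinates in [-1, 1].
        B, N, _ = points.shape
        planes = self.to_planes(text_emb).reshape(
            B * 3, self.plane_ch, self.plane_res, self.plane_res)
        # Project each 3D point onto the XY, XZ, and YZ planes.
        coords = torch.stack(
            [points[..., [0, 1]], points[..., [0, 2]], points[..., [1, 2]]],
            dim=1).reshape(B * 3, 1, N, 2)
        feats = F.grid_sample(planes, coords, align_corners=True)  # (B*3, C, 1, N)
        feats = feats.reshape(B, 3, self.plane_ch, N).sum(dim=1)   # sum over planes
        out = self.decoder(feats.permute(0, 2, 1))                 # (B, N, 4)
        sdf, rgb = out[..., :1], torch.sigmoid(out[..., 1:])
        return sdf, rgb

# Usage: query an SDF + color field for a batch of prompts in one forward pass;
# a textured mesh could then be extracted from the SDF (e.g., marching cubes).
model = TriplaneTextTo3D()
text_emb = torch.randn(2, 512)           # stand-in for a frozen text encoder
points = torch.rand(2, 1024, 3) * 2 - 1  # query points in [-1, 1]^3
sdf, rgb = model(text_emb, points)
print(sdf.shape, rgb.shape)              # (2, 1024, 1), (2, 1024, 3)
```

In an amortized setup like the one the abstract describes, one such shared network would be optimized across many text prompts at once (e.g., with a score-distillation-style loss from a 2D diffusion model) rather than per prompt, so inference for a new prompt reduces to a single forward pass.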