MaPa: Text-driven Photorealistic Material Painting for 3D Shapes
April 26, 2024
Authors: Shangzhan Zhang, Sida Peng, Tao Xu, Yuanbo Yang, Tianrun Chen, Nan Xue, Yujun Shen, Hujun Bao, Ruizhen Hu, Xiaowei Zhou
cs.AI
Abstract
This paper aims to generate materials for 3D meshes from text descriptions.
Unlike existing methods that synthesize texture maps, we propose to generate
segment-wise procedural material graphs as the appearance representation, which
supports high-quality rendering and provides substantial flexibility in
editing. Instead of relying on extensive paired data, i.e., 3D meshes with
material graphs and corresponding text descriptions, to train a material graph
generative model, we propose to leverage a pre-trained 2D diffusion model as
a bridge to connect the text and material graphs. Specifically, our approach
decomposes a shape into a set of segments and designs a segment-controlled
diffusion model to synthesize 2D images that are aligned with mesh parts. Based
on the generated images, we initialize the parameters of the material graphs and
fine-tune them through a differentiable rendering module to produce materials in
accordance with the textual description. Extensive experiments demonstrate the
superior performance of our framework in photorealism, resolution, and
editability over existing methods. Project page:
https://zhanghe3z.github.io/MaPa/
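
To make the described pipeline concrete, below is a minimal, hypothetical sketch of the final stage mentioned in the abstract: per-segment appearance parameters are initialized from a generated image and then fine-tuned through a differentiable rendering step. The `render` and `fit_materials` functions, the flat per-segment color parameters, and the mean-color initialization are illustrative assumptions, not the authors' implementation; real procedural material graphs expose many more parameters (roughness, normals, node weights) and are rendered on the mesh itself.

```python
import torch

# Hypothetical differentiable "render": maps per-segment material parameters
# (here just an RGB color per segment) to a toy image. This stands in for the
# paper's differentiable rendering module, which rasterizes the mesh with
# procedural materials.
def render(params, masks):
    # params: (S, 3) per-segment color; masks: (S, H, W) binary segment masks.
    # Returns a (3, H, W) image where each segment is filled with its color.
    return torch.einsum('sc,shw->chw', params, masks)

def fit_materials(target_image, masks, steps=200, lr=0.05):
    # Initialize per-segment parameters from the generated image by averaging
    # pixel values inside each segment (a crude stand-in for material-graph
    # initialization from the segment-aligned diffusion output).
    init = torch.stack([
        (target_image * m).sum(dim=(1, 2)) / m.sum().clamp(min=1)
        for m in masks
    ])
    params = init.clone().requires_grad_(True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Image-space loss between the differentiable render and the target.
        loss = torch.nn.functional.mse_loss(render(params, masks), target_image)
        loss.backward()
        opt.step()
    return params.detach()

if __name__ == "__main__":
    H = W = 64
    masks = torch.zeros(2, H, W)
    masks[0, :, :32] = 1.0   # left half  = segment 0
    masks[1, :, 32:] = 1.0   # right half = segment 1
    target = render(torch.tensor([[0.8, 0.2, 0.2], [0.1, 0.4, 0.9]]), masks)
    print(fit_materials(target, masks))
```

In the actual method, the target would be the segment-aligned image synthesized by the segment-controlled diffusion model, and the optimized quantities would be the parameters of retrieved procedural material graphs rather than raw colors.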