ChatPaper.ai


MaPa: Text-driven Photorealistic Material Painting for 3D Shapes

April 26, 2024
Authors: Shangzhan Zhang, Sida Peng, Tao Xu, Yuanbo Yang, Tianrun Chen, Nan Xue, Yujun Shen, Hujun Bao, Ruizhen Hu, Xiaowei Zhou
cs.AI

Abstract

This paper aims to generate materials for 3D meshes from text descriptions. Unlike existing methods that synthesize texture maps, we propose to generate segment-wise procedural material graphs as the appearance representation, which supports high-quality rendering and provides substantial flexibility in editing. Instead of relying on extensive paired data, i.e., 3D meshes with material graphs and corresponding text descriptions, to train a material graph generative model, we propose to leverage a pre-trained 2D diffusion model as a bridge connecting text and material graphs. Specifically, our approach decomposes a shape into a set of segments and designs a segment-controlled diffusion model to synthesize 2D images that are aligned with mesh parts. Based on the generated images, we initialize the parameters of the material graphs and fine-tune them through a differentiable rendering module to produce materials in accordance with the textual description. Extensive experiments demonstrate the superior performance of our framework in photorealism, resolution, and editability over existing methods. Project page: https://zhanghe3z.github.io/MaPa/
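The final stage of the pipeline — initializing material parameters from a generated image and refining them through a differentiable renderer — can be illustrated with a minimal sketch. The toy below stands in for the real system: the "material graph" is reduced to two scalar parameters, the "renderer" is a per-pixel linear shading model, and the gradients of the MSE loss are written out analytically. All function and variable names here are hypothetical; the paper's actual optimization uses full procedural material graphs and a physically based differentiable renderer.

```python
import numpy as np

def render(albedo, gain, lighting):
    # Toy differentiable renderer: per-pixel shading = albedo * gain * lighting.
    return albedo * gain * lighting

def fit_material(target, lighting, steps=500, lr=0.05):
    # Initialization (in MaPa this comes from the diffusion-generated image).
    albedo, gain = 0.5, 0.5
    for _ in range(steps):
        err = render(albedo, gain, lighting) - target
        # Analytic gradients of the mean-squared rendering loss.
        grad_albedo = 2.0 * np.mean(err * gain * lighting)
        grad_gain = 2.0 * np.mean(err * albedo * lighting)
        albedo -= lr * grad_albedo
        gain -= lr * grad_gain
    return albedo, gain

lighting = np.linspace(0.2, 1.0, 64)   # fake per-pixel lighting term
target = 0.8 * 0.9 * lighting          # "reference" image, unknown parameters
albedo, gain = fit_material(target, lighting)
print(round(albedo * gain, 3))
```

Note that only the product `albedo * gain` is identifiable from the loss, so the fit converges to the target product (0.72) rather than to unique per-parameter values — a miniature version of the ambiguity that makes good initialization from the generated image important.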
