Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models
December 21, 2023
Authors: Xianfang Zeng, Xin Chen, Zhongqi Qi, Wen Liu, Zibo Zhao, Zhibin Wang, Bin Fu, Yong Liu, Gang Yu
cs.AI
Abstract
This paper presents Paint3D, a novel coarse-to-fine generative framework that
is capable of producing high-resolution, lighting-less, and diverse 2K UV
texture maps for untextured 3D meshes conditioned on text or image inputs. The
key challenge addressed is generating high-quality textures without embedded
illumination information, which allows the textures to be re-lighted or
re-edited within modern graphics pipelines. To achieve this, our method first
leverages a pre-trained depth-aware 2D diffusion model to generate
view-conditional images and perform multi-view texture fusion, producing an
initial coarse texture map. However, since 2D models can neither fully represent
3D shapes nor disable lighting effects, the coarse texture map exhibits
incomplete areas and illumination artifacts. To resolve this, we train separate UV
Inpainting and UVHD diffusion models specialized for the shape-aware refinement
of incomplete areas and the removal of illumination artifacts. Through this
coarse-to-fine process, Paint3D can produce high-quality 2K UV textures that
maintain semantic consistency while being lighting-less, significantly
advancing the state-of-the-art in texturing 3D objects.
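The multi-view texture fusion in the coarse stage can be illustrated with a small sketch. This is not the authors' implementation: it assumes each view's generated image has already been back-projected into UV space, and that a per-view weight map (e.g. the cosine between surface normal and view direction) marks how reliably each view observes each texel. The hypothetical `fuse_views` helper blends the views by these weights; texels covered by no view are exactly the incomplete areas handed to the UV Inpainting model.

```python
import numpy as np

def fuse_views(uv_textures, view_weights):
    """Blend per-view partial UV textures into one coarse texture map.

    uv_textures:  list of (H, W, 3) arrays, each a view's colors already
                  back-projected into UV space (zeros where the view is blind).
    view_weights: list of (H, W) arrays, e.g. cosine of the angle between the
                  surface normal and the view direction (0 where unseen).
    """
    acc = np.zeros_like(uv_textures[0], dtype=np.float64)
    w_sum = np.zeros(uv_textures[0].shape[:2], dtype=np.float64)
    for tex, w in zip(uv_textures, view_weights):
        acc += tex * w[..., None]      # weighted color accumulation
        w_sum += w
    # Normalize by total weight; clip avoids division by zero on unseen texels.
    fused = acc / np.clip(w_sum, 1e-8, None)[..., None]
    coverage = w_sum > 0               # texels seen by at least one view
    return fused, coverage             # ~coverage goes to the UV inpainting stage
```

For example, with two views that each cover a different column of a 2x2 texture, the fused map takes each view's color where it dominates, and the uncovered texel is flagged for inpainting:

```python
texA = np.zeros((2, 2, 3)); texA[:, 0] = [1.0, 0.0, 0.0]
wA = np.array([[1.0, 0.0], [1.0, 0.0]])
texB = np.zeros((2, 2, 3)); texB[0, 1] = [0.0, 0.0, 1.0]
wB = np.array([[0.0, 1.0], [0.0, 0.0]])
fused, covered = fuse_views([texA, texB], [wA, wB])
# fused[0, 0] is red, fused[0, 1] is blue, and covered[1, 1] is False.
```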