

Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models

December 21, 2023
Authors: Xianfang Zeng, Xin Chen, Zhongqi Qi, Wen Liu, Zibo Zhao, Zhibin Wang, Bin Fu, Yong Liu, Gang Yu
cs.AI

Abstract
This paper presents Paint3D, a novel coarse-to-fine generative framework capable of producing high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes, conditioned on text or image inputs. The key challenge addressed is generating high-quality textures without embedded illumination information, which allows the textures to be relit or re-edited within modern graphics pipelines. To achieve this, our method first leverages a pre-trained depth-aware 2D diffusion model to generate view-conditional images and performs multi-view texture fusion, producing an initial coarse texture map. However, because 2D models can neither fully represent 3D shapes nor suppress lighting effects, the coarse texture map exhibits incomplete areas and illumination artifacts. To resolve this, we train separate UV Inpainting and UVHD diffusion models specialized for the shape-aware refinement of incomplete areas and the removal of illumination artifacts. Through this coarse-to-fine process, Paint3D can produce high-quality 2K UV textures that maintain semantic consistency while being lighting-less, significantly advancing the state of the art in texturing 3D objects.
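The coarse stage described above (fusing per-view texture projections into one UV map, then filling the regions no view covers) can be sketched in miniature. This is a hypothetical NumPy toy, not the paper's implementation: `fuse_views` and `inpaint_incomplete` are invented names, the "views" are hand-made partial textures rather than diffusion outputs, and the inpainting step is a mean-color fill standing in for the UV Inpainting diffusion model.

```python
import numpy as np

def fuse_views(view_textures, view_weights):
    """Multi-view texture fusion: per-texel weighted average of the
    per-view UV projections. Texels with zero total weight stay
    unpainted, which is what produces the 'incomplete areas'."""
    weighted = sum(w[..., None] * t for t, w in zip(view_textures, view_weights))
    total = sum(view_weights)[..., None]
    coverage = total[..., 0] > 0          # boolean mask of painted texels
    fused = np.zeros_like(weighted)
    fused[coverage] = weighted[coverage] / total[coverage]
    return fused, coverage

def inpaint_incomplete(texture, coverage):
    """Stand-in for the UV Inpainting diffusion model: here we simply
    fill unpainted texels with the mean painted color."""
    filled = texture.copy()
    filled[~coverage] = texture[coverage].mean(axis=0)
    return filled

# Toy UV canvas (8x8 instead of 2K): two partial views, each painting
# one side of the texture, with a one-column hole neither view covers.
H = W = 8
tex_a = np.zeros((H, W, 3)); tex_a[:, :5] = [1.0, 0.2, 0.2]  # "left" view
tex_b = np.zeros((H, W, 3)); tex_b[:, 3:] = [0.2, 0.2, 1.0]  # "right" view
w_a = np.zeros((H, W)); w_a[:, :5] = 1.0
w_b = np.zeros((H, W)); w_b[:, 3:] = 1.0
w_a[:, 4] = 0.0; w_b[:, 4] = 0.0      # column 4: covered by no view

coarse, coverage = fuse_views([tex_a, tex_b], [w_a, w_b])
refined = inpaint_incomplete(coarse, coverage)
```

In the fused `coarse` map, column 4 is left black and flagged uncovered; overlapping texels (column 3) are averaged across views, and `refined` has every texel painted. The paper's fine stage additionally removes illumination artifacts with a UVHD diffusion model, which this sketch does not model.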