RoCoTex: A Robust Method for Consistent Texture Synthesis with Diffusion Models
September 30, 2024
Authors: Jangyeong Kim, Donggoo Kang, Junyoung Choi, Jeonga Wi, Junho Gwon, Jiun Bae, Dumim Yoon, Junghyun Han
cs.AI
Abstract
Text-to-texture generation has recently attracted increasing attention, but
existing methods often suffer from the problems of view inconsistencies,
apparent seams, and misalignment between textures and the underlying mesh. In
this paper, we propose a robust text-to-texture method for generating
consistent and seamless textures that are well aligned with the mesh. Our
method leverages state-of-the-art 2D diffusion models, including SDXL and
multiple ControlNets, to capture structural features and intricate details in
the generated textures. The method also employs a symmetrical view synthesis
strategy combined with regional prompts for enhancing view consistency.
Additionally, it introduces novel texture blending and soft-inpainting
techniques, which significantly reduce the seam regions. Extensive experiments
demonstrate that our method outperforms existing state-of-the-art methods.
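The abstract describes conditioning SDXL on multiple ControlNets so that generated textures follow the structure of the underlying mesh. Below is a minimal sketch of how such multi-ControlNet conditioning can be invoked with the Hugging Face diffusers library; the checkpoint names, conditioning maps (depth and Canny edges rendered from the mesh), file paths, and prompt are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): generating one view's texture image
# with SDXL guided by two ControlNets, using Hugging Face diffusers.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# Two conditioning networks, e.g. depth and Canny edges rendered from the mesh.
controlnets = [
    ControlNetModel.from_pretrained(
        "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16
    ),
    ControlNetModel.from_pretrained(
        "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
    ),
]

pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnets,
    torch_dtype=torch.float16,
).to("cuda")

# Conditioning images would be rendered from the target mesh for a given view
# (paths here are hypothetical).
depth_map = load_image("renders/view0_depth.png")
edge_map = load_image("renders/view0_canny.png")

image = pipe(
    prompt="a weathered bronze statue, photorealistic texture",
    image=[depth_map, edge_map],
    controlnet_conditioning_scale=[0.8, 0.5],
    num_inference_steps=30,
).images[0]
image.save("view0_texture.png")
```

In a full text-to-texture pipeline, an image like this would be produced per view and then projected back onto the mesh's UV map before blending across views.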