

FlashTex: Fast Relightable Mesh Texturing with LightControlNet

February 20, 2024
作者: Kangle Deng, Timothy Omernick, Alexander Weiss, Deva Ramanan, Jun-Yan Zhu, Tinghui Zhou, Maneesh Agrawala
cs.AI

Abstract

Manually creating textures for 3D meshes is time-consuming, even for expert visual content creators. We propose a fast approach for automatically texturing an input 3D mesh based on a user-provided text prompt. Importantly, our approach disentangles lighting from surface material/reflectance in the resulting texture so that the mesh can be properly relit and rendered in any lighting environment. We introduce LightControlNet, a new text-to-image model based on the ControlNet architecture, which allows the specification of the desired lighting as a conditioning image to the model. Our text-to-texture pipeline then constructs the texture in two stages. The first stage produces a sparse set of visually consistent reference views of the mesh using LightControlNet. The second stage applies a texture optimization based on Score Distillation Sampling (SDS) that works with LightControlNet to increase the texture quality while disentangling surface material from lighting. Our pipeline is significantly faster than previous text-to-texture methods, while producing high-quality and relightable textures.
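The second stage's texture optimization relies on Score Distillation Sampling (SDS), which turns a pretrained diffusion model into a gradient signal for the texture parameters without backpropagating through the denoiser itself. Below is a minimal sketch of that idea under simplifying assumptions: `dummy_denoiser` is a hypothetical stand-in for the paper's LightControlNet-conditioned diffusion model, and the renderer is the identity, so the gradient applies to the texture directly. This is an illustration of the SDS mechanism, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dummy_denoiser(noisy, t):
    # Hypothetical stand-in for the LightControlNet-conditioned diffusion
    # model: predicts the noise that was added at timestep t.
    return 0.9 * noisy

def sds_gradient(image, alphas, t):
    """Score Distillation Sampling gradient w.r.t. a rendered image.

    SDS noises the image, asks the diffusion model to predict the noise,
    and uses w(t) * (eps_pred - eps) as the gradient, deliberately
    skipping the denoiser's Jacobian.
    """
    eps = rng.standard_normal(image.shape)           # sampled noise
    a_t = alphas[t]                                  # noise schedule value
    noisy = np.sqrt(a_t) * image + np.sqrt(1.0 - a_t) * eps
    eps_pred = dummy_denoiser(noisy, t)              # model's noise estimate
    w = 1.0 - a_t                                    # a common weighting choice
    return w * (eps_pred - eps)

# Toy setup: a 3x4x4 RGB texture rendered by the identity map, updated
# by gradient descent on the SDS signal.
texture = np.zeros((3, 4, 4))
alphas = np.linspace(0.99, 0.01, 1000)
for step in range(10):
    texture -= 0.1 * sds_gradient(texture, alphas, t=500)
```

In the paper's pipeline, the rendered image would instead come from a differentiable renderer over the mesh's material parameters, with LightControlNet's lighting condition keeping illumination out of the optimized surface material.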