
FlashTex: Fast Relightable Mesh Texturing with LightControlNet

February 20, 2024
作者: Kangle Deng, Timothy Omernick, Alexander Weiss, Deva Ramanan, Jun-Yan Zhu, Tinghui Zhou, Maneesh Agrawala
cs.AI

Abstract

Manually creating textures for 3D meshes is time-consuming, even for expert visual content creators. We propose a fast approach for automatically texturing an input 3D mesh based on a user-provided text prompt. Importantly, our approach disentangles lighting from surface material/reflectance in the resulting texture so that the mesh can be properly relit and rendered in any lighting environment. We introduce LightControlNet, a new text-to-image model based on the ControlNet architecture, which allows the specification of the desired lighting as a conditioning image to the model. Our text-to-texture pipeline then constructs the texture in two stages. The first stage produces a sparse set of visually consistent reference views of the mesh using LightControlNet. The second stage applies a texture optimization based on Score Distillation Sampling (SDS) that works with LightControlNet to increase the texture quality while disentangling surface material from lighting. Our pipeline is significantly faster than previous text-to-texture methods, while producing high-quality and relightable textures.
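The core of the second stage is a Score Distillation Sampling (SDS) update, which nudges texture parameters so that noised renderings agree with a diffusion model's noise predictions. The sketch below is a minimal toy illustration of that update rule, not the paper's implementation: `toy_denoiser` is a hypothetical stand-in for LightControlNet's noise predictor, the cosine noise schedule is an assumption, and the renderer Jacobian is taken to be the identity for simplicity.

```python
import numpy as np

def sds_step(texture, denoiser, lr=0.01, rng=None):
    """One toy Score Distillation Sampling (SDS) update on texture parameters.

    `denoiser(noisy, t)` is a stand-in for a diffusion model's noise
    predictor (LightControlNet in the paper); the schedule and weighting
    here are illustrative assumptions, not the paper's choices.
    """
    rng = rng or np.random.default_rng(0)
    t = rng.uniform(0.02, 0.98)               # random diffusion timestep
    eps = rng.standard_normal(texture.shape)  # injected Gaussian noise
    alpha = np.cos(0.5 * np.pi * t) ** 2      # toy cosine noise schedule
    noisy = np.sqrt(alpha) * texture + np.sqrt(1 - alpha) * eps
    eps_pred = denoiser(noisy, t)
    # SDS gradient: weighted residual between predicted and injected noise.
    # The Jacobian of the differentiable renderer is omitted here
    # (identity rendering assumed in this sketch).
    w = 1.0 - alpha
    grad = w * (eps_pred - eps)
    return texture - lr * grad

def toy_denoiser(noisy, t):
    """Hypothetical denoiser that pulls estimates toward a flat gray texture."""
    alpha = np.cos(0.5 * np.pi * t) ** 2
    target = np.full_like(noisy, 0.5)
    return (noisy - np.sqrt(alpha) * target) / np.sqrt(1 - alpha)

# Repeated SDS steps drive the texture toward the denoiser's preferred image.
texture = np.zeros((4, 4, 3))
for _ in range(200):
    texture = sds_step(texture, toy_denoiser, lr=0.05)
```

In the actual pipeline, the denoiser is conditioned on the lighting image, which is what lets the optimization separate surface material from illumination; this sketch only shows the shape of the update itself.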