
Style-NeRF2NeRF: 3D Style Transfer From Style-Aligned Multi-View Images

June 19, 2024
作者: Haruo Fujiwara, Yusuke Mukuta, Tatsuya Harada
cs.AI

Abstract

We propose a simple yet effective pipeline for stylizing a 3D scene, harnessing the power of 2D image diffusion models. Given a NeRF model reconstructed from a set of multi-view images, we perform 3D style transfer by refining the source NeRF model using stylized images generated by a style-aligned image-to-image diffusion model. Given a target style prompt, we first generate perceptually similar multi-view images by leveraging a depth-conditioned diffusion model with an attention-sharing mechanism. Next, based on the stylized multi-view images, we propose to guide the style transfer process with the sliced Wasserstein loss based on the feature maps extracted from a pre-trained CNN model. Our pipeline consists of decoupled steps, allowing users to test various prompt ideas and preview the stylized 3D result before proceeding to the NeRF fine-tuning stage. We demonstrate that our method can transfer diverse artistic styles to real-world 3D scenes with competitive quality.
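The sliced Wasserstein loss used to guide the NeRF fine-tuning can be sketched as follows. This is an illustrative NumPy implementation under stated assumptions, not the authors' code: the function name, the number of random projections, and the requirement that both feature sets have the same number of samples are all choices made here for clarity. The idea is that the distance between two distributions of CNN feature vectors is approximated by projecting both sets onto random 1-D directions, where optimal transport reduces to sorting.

```python
import numpy as np

def sliced_wasserstein_loss(feat_a, feat_b, n_proj=64, rng=None):
    """Approximate the sliced Wasserstein-2 distance between two feature
    sets of shape [N, C] (same N), e.g. flattened CNN feature maps.

    Each random unit direction gives a 1-D projection of both sets;
    sorting the projections yields the 1-D optimal transport matching,
    and averaging squared differences over directions approximates the
    sliced Wasserstein distance.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    c = feat_a.shape[1]
    # Random projection directions on the unit sphere in feature space.
    proj = rng.standard_normal((c, n_proj))
    proj /= np.linalg.norm(proj, axis=0, keepdims=True)
    # Project features onto each direction: [N, n_proj].
    pa = np.sort(feat_a @ proj, axis=0)
    pb = np.sort(feat_b @ proj, axis=0)
    # Sorted 1-D samples are optimally matched; compare them pairwise.
    return ((pa - pb) ** 2).mean()
```

In the pipeline described above, `feat_a` would come from renderings of the NeRF being fine-tuned and `feat_b` from the stylized multi-view images, with features extracted by a pre-trained CNN such as VGG; because the loss matches feature distributions rather than pixels, it tolerates the view-to-view variation left by the diffusion model.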
