

HyperDreamer: Hyper-Realistic 3D Content Generation and Editing from a Single Image

December 7, 2023
Authors: Tong Wu, Zhibing Li, Shuai Yang, Pan Zhang, Xingang Pan, Jiaqi Wang, Dahua Lin, Ziwei Liu
cs.AI

Abstract

3D content creation from a single image is a long-standing yet highly desirable task. Recent advances introduce 2D diffusion priors, yielding reasonable results. However, existing methods are not hyper-realistic enough for post-generation use, as users cannot view, render, and edit the resulting 3D content from the full range of viewpoints. To address these challenges, we introduce HyperDreamer with several key designs and appealing properties: 1) Viewable: 360-degree mesh modeling with high-resolution textures enables the creation of visually compelling 3D models from the full range of observation points. 2) Renderable: Fine-grained semantic segmentation and data-driven priors are incorporated as guidance to learn reasonable albedo, roughness, and specular properties of the materials, enabling semantic-aware arbitrary material estimation. 3) Editable: For a generated model or their own data, users can interactively select any region with a few clicks and efficiently edit its texture using text-based guidance. Extensive experiments demonstrate the effectiveness of HyperDreamer in modeling region-aware materials with high-resolution textures and enabling user-friendly editing. We believe that HyperDreamer holds promise for advancing 3D content creation and finding applications in various domains.
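The abstract describes the estimated material properties (albedo, roughness, specular) only at a high level; the sketch below is a generic, simplified Blinn-Phong-style shading of a single surface point, offered purely to illustrate how these three quantities interact in a physically based material model. It is an assumption-laden toy example, not HyperDreamer's actual renderer or training objective, and all function and parameter names here are hypothetical.

```python
import numpy as np

def normalize(v):
    """Return the unit-length version of a 3D vector."""
    return v / np.linalg.norm(v)

def shade_point(albedo, roughness, specular, n, l, v, light_color=np.ones(3)):
    """Shade one surface point with a simplified Blinn-Phong-style model.

    albedo    : (3,) diffuse base color in [0, 1]
    roughness : scalar in (0, 1]; lower values give tighter, shinier highlights
    specular  : scalar specular reflectance in [0, 1]
    n, l, v   : surface normal, light direction, and view direction (any scale)
    """
    n, l, v = normalize(n), normalize(l), normalize(v)
    h = normalize(l + v)                          # half vector between light and view
    n_dot_l = max(float(n @ l), 0.0)
    n_dot_h = max(float(n @ h), 0.0)
    shininess = 2.0 / max(roughness ** 2, 1e-4)   # map roughness to a Phong-style exponent
    diffuse = albedo * n_dot_l                    # Lambertian diffuse term
    highlight = specular * (n_dot_h ** shininess) * n_dot_l  # specular lobe
    return light_color * (diffuse + highlight)

# Example: a reddish, fairly glossy point lit from above-right and viewed head-on.
color = shade_point(
    albedo=np.array([0.8, 0.3, 0.2]),
    roughness=0.2,
    specular=0.5,
    n=np.array([0.0, 0.0, 1.0]),
    l=np.array([0.3, 0.3, 1.0]),
    v=np.array([0.0, 0.0, 1.0]),
)
print(color)
```

In such a decomposition, editing only the albedo changes the base color while editing roughness or specular changes how the surface responds to lighting, which is why estimating these properties per semantic region (as the abstract describes) enables relighting and region-aware texture edits.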