
LDM3D: Latent Diffusion Model for 3D

May 18, 2023
作者: Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias Muller, Vasudev Lal
cs.AI

Abstract

This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences. A short video summarizing the approach can be found at https://t.ly/tdi2.
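The abstract describes LDM3D's output as an RGBD image, i.e. an RGB image paired with a per-pixel depth map generated from the same text prompt. As a minimal illustration of that output format (not the paper's own code), the sketch below stacks an RGB image and a depth map into a single 4-channel RGBD array, normalizing depth so all channels share a [0, 1] range; the `to_rgbd` helper and the random placeholder arrays are our assumptions for illustration:

```python
import numpy as np

def to_rgbd(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Stack an RGB image (H, W, 3) and a depth map (H, W) into
    a single RGBD array of shape (H, W, 4).

    Depth is min-max normalized to [0, 1] so the fourth channel
    shares the range of the color channels.
    """
    if rgb.shape[:2] != depth.shape:
        raise ValueError("RGB and depth resolutions must match")
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)  # normalize to [0, 1]
    return np.concatenate([rgb.astype(np.float32), d[..., None]], axis=-1)

# Placeholder 512x512 inputs standing in for a generated image and depth map
rgb = np.random.rand(512, 512, 3).astype(np.float32)
depth = np.random.rand(512, 512).astype(np.float32)
rgbd = to_rgbd(rgb, depth)
print(rgbd.shape)  # (512, 512, 4)
```

An RGBD array of this shape is the kind of input a downstream viewer such as the paper's DepthFusion application could consume to build a depth-aware 360-degree scene.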