
Hash3D: Training-free Acceleration for 3D Generation

April 9, 2024
Authors: Xingyi Yang, Xinchao Wang
cs.AI

Abstract

The evolution of 3D generative modeling has been notably propelled by the adoption of 2D diffusion models. Despite this progress, the cumbersome optimization process itself presents a critical hurdle to efficiency. In this paper, we introduce Hash3D, a universal acceleration for 3D generation without model training. Central to Hash3D is the insight that feature-map redundancy is prevalent in images rendered from camera positions and diffusion timesteps in close proximity. By effectively hashing and reusing these feature maps across neighboring timesteps and camera angles, Hash3D substantially reduces redundant calculations, thus accelerating the diffusion model's inference in 3D generation tasks. We achieve this through adaptive grid-based hashing. Surprisingly, this feature-sharing mechanism not only speeds up generation but also enhances the smoothness and view consistency of the synthesized 3D objects. Our experiments, covering 5 text-to-3D and 3 image-to-3D models, demonstrate Hash3D's versatility in speeding up optimization, enhancing efficiency by 1.3 to 4 times. Additionally, Hash3D's integration with 3D Gaussian splatting substantially speeds up 3D model creation, reducing text-to-3D processing to about 10 minutes and image-to-3D conversion to roughly 30 seconds. The project page is at https://adamdad.github.io/hash3D/.
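The caching idea behind the abstract can be illustrated with a minimal sketch. The class and parameter names below are hypothetical, and the fixed cell widths stand in for the paper's adaptive grid: queries whose diffusion timesteps and camera angles fall in the same grid cell reuse one cached feature map instead of recomputing it.

```python
# Hypothetical sketch of grid-based feature hashing (names and cell
# sizes are illustrative, not the paper's adaptive parameters).

class GridFeatureCache:
    def __init__(self, t_cell=50, angle_cell=15.0):
        self.t_cell = t_cell          # timestep bucket width
        self.angle_cell = angle_cell  # camera-angle bucket width (degrees)
        self.cache = {}               # grid cell -> cached feature map

    def _key(self, t, azimuth, elevation):
        # Quantize (timestep, camera angles) into a grid cell.
        return (int(t // self.t_cell),
                int(azimuth // self.angle_cell),
                int(elevation // self.angle_cell))

    def get_or_compute(self, t, azimuth, elevation, compute_fn):
        key = self._key(t, azimuth, elevation)
        if key not in self.cache:
            # Cache miss: run the expensive diffusion feature pass once.
            self.cache[key] = compute_fn()
        return self.cache[key]

# Two nearby (timestep, view) queries share one feature computation.
cache = GridFeatureCache()
f1 = cache.get_or_compute(t=420, azimuth=30.0, elevation=10.0,
                          compute_fn=lambda: ["feature-map"])
f2 = cache.get_or_compute(t=430, azimuth=33.0, elevation=12.0,
                          compute_fn=lambda: ["feature-map"])
assert f2 is f1  # same grid cell, so the cached features are reused
```

The quantization step is what trades exactness for speed: coarser cells mean more reuse (and more approximation), which is why the paper makes the grid adaptive rather than fixed.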

