Hash3D: Training-free Acceleration for 3D Generation

April 9, 2024
Authors: Xingyi Yang, Xinchao Wang
cs.AI

Abstract

The evolution of 3D generative modeling has been notably propelled by the adoption of 2D diffusion models. Despite this progress, the cumbersome optimization process itself presents a critical hurdle to efficiency. In this paper, we introduce Hash3D, a universal method for accelerating 3D generation without any model training. Central to Hash3D is the insight that feature-map redundancy is prevalent across images rendered from nearby camera positions and diffusion timesteps. By effectively hashing and reusing these feature maps across neighboring timesteps and camera angles, Hash3D eliminates much of this redundant computation, thus accelerating the diffusion model's inference in 3D generation tasks. We achieve this through an adaptive grid-based hashing scheme. Surprisingly, this feature-sharing mechanism not only speeds up generation but also improves the smoothness and view consistency of the synthesized 3D objects. Our experiments, covering 5 text-to-3D and 3 image-to-3D models, demonstrate Hash3D's versatility in speeding up optimization, improving efficiency by 1.3 to 4 times. Additionally, integrating Hash3D with 3D Gaussian splatting substantially accelerates 3D model creation, reducing text-to-3D processing to about 10 minutes and image-to-3D conversion to roughly 30 seconds. The project page is at https://adamdad.github.io/hash3D/.
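
As a rough illustration of the mechanism the abstract describes, the sketch below caches feature maps under a key that quantizes the camera angles and diffusion timestep onto a coarse grid, so queries landing in the same cell reuse the stored result instead of rerunning the network. The class name, cell sizes, and (azimuth, elevation, timestep) key layout are assumptions made here for illustration; in particular, the paper's hashing is adaptive, whereas this sketch uses fixed cell sizes.

```python
# Minimal sketch of grid-based feature-map hashing, under the
# assumptions stated above; not the paper's actual implementation.

class GridFeatureCache:
    """Cache diffusion feature maps under a key that quantizes the
    camera pose and diffusion timestep onto a grid, so that nearby
    queries share one cached result instead of recomputing it."""

    def __init__(self, angle_cell_deg=10.0, step_cell=5):
        self.angle_cell_deg = angle_cell_deg  # grid cell size over camera angles
        self.step_cell = step_cell            # grid cell size over timesteps
        self._cache = {}

    def _key(self, azimuth_deg, elevation_deg, timestep):
        # Queries falling into the same grid cell map to the same key.
        return (
            int(azimuth_deg // self.angle_cell_deg),
            int(elevation_deg // self.angle_cell_deg),
            int(timestep // self.step_cell),
        )

    def get_or_compute(self, azimuth_deg, elevation_deg, timestep, compute_fn):
        # Reuse the cached feature map on a hit; otherwise run the
        # expensive diffusion forward pass once and store its output.
        key = self._key(azimuth_deg, elevation_deg, timestep)
        if key not in self._cache:
            self._cache[key] = compute_fn()
        return self._cache[key]
```

Because the key is a coarse quantization rather than the exact query, a lookup from a slightly different camera angle or timestep still hits the same cell, which is what turns the per-view redundancy noted in the abstract into saved forward passes.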
