DreamScene: 3D Gaussian-based End-to-end Text-to-3D Scene Generation
July 18, 2025
Authors: Haoran Li, Yuli Tian, Kun Lan, Yong Liao, Lin Wang, Pan Hui, Peng Yuan Zhou
cs.AI
Abstract
Generating 3D scenes from natural language holds great promise for
applications in gaming, film, and design. However, existing methods struggle
with automation, 3D consistency, and fine-grained control. We present
DreamScene, an end-to-end framework for high-quality and editable 3D scene
generation from text or dialogue. DreamScene begins with a scene planning
module, where a GPT-4 agent infers object semantics and spatial constraints to
construct a hybrid graph. A graph-based placement algorithm then produces a
structured, collision-free layout. Based on this layout, Formation Pattern
Sampling (FPS) generates object geometry using multi-timestep sampling and
reconstructive optimization, enabling fast and realistic synthesis. To ensure
global consistency, DreamScene employs a progressive camera sampling strategy
tailored to both indoor and outdoor settings. Finally, the system supports
fine-grained scene editing, including object movement, appearance changes, and
4D dynamic motion. Experiments demonstrate that DreamScene surpasses prior
methods in quality, consistency, and flexibility, offering a practical solution
for open-domain 3D content creation. Code and demos are available at
https://jahnsonblack.github.io/DreamScene-Full/.