DiffusionGPT: LLM-Driven Text-to-Image Generation System

January 18, 2024
Authors: Jie Qin, Jie Wu, Weifeng Chen, Yuxi Ren, Huixia Li, Hefeng Wu, Xuefeng Xiao, Rui Wang, Shilei Wen
cs.AI

Abstract

Diffusion models have opened up new avenues for the field of image generation, resulting in the proliferation of high-quality models shared on open-source platforms. However, a major challenge persists: current text-to-image systems are often unable to handle diverse inputs, or are limited to single-model results. Current unification attempts often fall into two orthogonal aspects: i) parsing diverse prompts at the input stage; ii) activating expert models for output. To combine the best of both worlds, we propose DiffusionGPT, which leverages Large Language Models (LLMs) to offer a unified generation system capable of seamlessly accommodating various types of prompts and integrating domain-expert models. DiffusionGPT constructs domain-specific trees for various generative models based on prior knowledge. Given an input, the LLM parses the prompt and employs the Tree-of-Thought to guide the selection of an appropriate model, thereby relaxing input constraints and ensuring exceptional performance across diverse domains. Moreover, we introduce Advantage Databases, in which the Tree-of-Thought is enriched with human feedback, aligning the model selection process with human preferences. Through extensive experiments and comparisons, we demonstrate the effectiveness of DiffusionGPT, showcasing its potential for pushing the boundaries of image synthesis in diverse domains.
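
To make the selection flow described in the abstract concrete, below is a minimal Python sketch of a tree-guided model-selection loop in that spirit. The tree contents, the `llm_choose` stand-in, and the advantage scores are illustrative assumptions, not the paper's actual prompts, taxonomy, or implementation.

```python
# Hypothetical sketch of DiffusionGPT-style model selection:
# walk a category tree with LLM decisions, then rank leaf models
# by human-feedback ("advantage") scores. All names are illustrative.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class TreeNode:
    name: str
    children: list["TreeNode"] = field(default_factory=list)
    models: list[str] = field(default_factory=list)  # leaf nodes hold candidate expert models

def select_model(prompt: str,
                 root: TreeNode,
                 llm_choose: Callable[[str, list[str]], str],
                 advantage_scores: dict[str, float]) -> str:
    """At each tree level, ask the LLM which child category best matches the
    parsed prompt; at a leaf, prefer the model with the highest advantage score."""
    node = root
    while node.children:
        choice = llm_choose(prompt, [c.name for c in node.children])
        node = next(c for c in node.children if c.name == choice)
    return max(node.models, key=lambda m: advantage_scores.get(m, 0.0))

# Toy usage with a keyword-matching stand-in for the LLM call.
tree = TreeNode("root", children=[
    TreeNode("photorealistic", models=["realistic-expert-v1"]),
    TreeNode("anime", models=["anime-expert-a", "anime-expert-b"]),
])
fake_llm = lambda prompt, options: "anime" if "anime" in prompt else options[0]
scores = {"anime-expert-a": 0.7, "anime-expert-b": 0.9}
print(select_model("an anime girl under cherry blossoms", tree, fake_llm, scores))
```

In the actual system the choice at each node and the advantage statistics come from the LLM's Tree-of-Thought reasoning and the human-feedback Advantage Databases; the sketch only shows how those two signals could compose into a single model choice.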