

DiffusionGPT: LLM-Driven Text-to-Image Generation System

January 18, 2024
Authors: Jie Qin, Jie Wu, Weifeng Chen, Yuxi Ren, Huixia Li, Hefeng Wu, Xuefeng Xiao, Rui Wang, Shilei Wen
cs.AI

Abstract

Diffusion models have opened up new avenues for the field of image generation, resulting in the proliferation of high-quality models shared on open-source platforms. However, a major challenge persists: current text-to-image systems are often unable to handle diverse inputs, or are limited to single-model results. Current unified attempts often fall into two orthogonal aspects: i) parsing diverse prompts at the input stage; ii) activating expert models for output. To combine the best of both worlds, we propose DiffusionGPT, which leverages Large Language Models (LLMs) to offer a unified generation system capable of seamlessly accommodating various types of prompts and integrating domain-expert models. DiffusionGPT constructs domain-specific trees for various generative models based on prior knowledge. When provided with an input, the LLM parses the prompt and employs the Tree-of-Thought to guide the selection of an appropriate model, thereby relaxing input constraints and ensuring exceptional performance across diverse domains. Moreover, we introduce Advantage Databases, in which the Tree-of-Thought is enriched with human feedback, aligning the model selection process with human preferences. Through extensive experiments and comparisons, we demonstrate the effectiveness of DiffusionGPT, showcasing its potential for pushing the boundaries of image synthesis in diverse domains.
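
The workflow described in the abstract (prompt parsing, Tree-of-Thought-guided model selection, and human-feedback ranking via the Advantage Databases) can be illustrated with a minimal sketch. This is not the paper's actual implementation: the toy LLM, the tree contents, the scoring dictionary, and all model names below are hypothetical placeholders used only to show how the pieces fit together.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

# A node in a domain-specific model tree: internal nodes are category tags,
# leaves carry the name of a candidate expert generation model.
@dataclass
class TreeNode:
    tag: str
    children: List["TreeNode"] = field(default_factory=list)
    model_name: Optional[str] = None  # set only on leaf nodes

def parse_prompt(llm: Callable[[str], str], prompt: str) -> str:
    """Step 1: ask the LLM to distill the core subject of the input prompt."""
    return llm(f"Extract the core subject of this image prompt: {prompt}")

def tree_of_thought_search(llm: Callable[[str], str],
                           node: TreeNode,
                           subject: str) -> List[TreeNode]:
    """Step 2: walk the tree top-down, letting the LLM pick the child
    category that best matches the parsed subject; return candidate leaves."""
    if node.model_name is not None:
        return [node]
    options = ", ".join(child.tag for child in node.children)
    choice = llm(f"Subject: {subject}. Pick the closest category from: {options}")
    matched = [c for c in node.children if c.tag.lower() in choice.lower()]
    next_nodes = matched or node.children  # fall back if nothing matched
    leaves: List[TreeNode] = []
    for child in next_nodes:
        leaves.extend(tree_of_thought_search(llm, child, subject))
    return leaves

def select_model(candidates: List[TreeNode],
                 advantage_db: Dict[str, float]) -> str:
    """Step 3: rank candidate experts by human-feedback scores
    (the 'Advantage Database' role) and return the best one."""
    best = max(candidates, key=lambda n: advantage_db.get(n.model_name, 0.0))
    return best.model_name

if __name__ == "__main__":
    # Toy stand-ins: a keyword-matching "LLM" and a tiny two-domain tree.
    def toy_llm(query: str) -> str:
        return "anime" if "anime" in query.lower() else "photorealistic"

    tree = TreeNode("root", children=[
        TreeNode("anime", children=[TreeNode("anime", model_name="anime-expert-v1")]),
        TreeNode("photorealistic", children=[TreeNode("photorealistic", model_name="photo-expert-v2")]),
    ])
    scores = {"anime-expert-v1": 0.9, "photo-expert-v2": 0.7}  # mock feedback scores

    prompt = "an anime girl standing in a neon-lit street"
    subject = parse_prompt(toy_llm, prompt)
    candidates = tree_of_thought_search(toy_llm, tree, subject)
    print("Selected expert model:", select_model(candidates, scores))
```

In the actual system the selected expert would then be invoked to generate the image; here the sketch stops at model selection, since that is the part the abstract describes in detail.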