

Divide and Conquer: Language Models can Plan and Self-Correct for Compositional Text-to-Image Generation

January 28, 2024
作者: Zhenyu Wang, Enze Xie, Aoxue Li, Zhongdao Wang, Xihui Liu, Zhenguo Li
cs.AI

Abstract

Despite significant advances in text-to-image models for generating high-quality images, these methods still struggle to keep generated images faithful to complex text prompts, especially when it comes to preserving object attributes and relationships. In this paper, we propose CompAgent, a training-free approach for compositional text-to-image generation with a large language model (LLM) agent at its core. The fundamental idea underlying CompAgent is a divide-and-conquer methodology. Given a complex text prompt containing multiple concepts, including objects, attributes, and relationships, the LLM agent first decomposes it: it extracts the individual objects and their associated attributes and predicts a coherent scene layout. These individual objects can then be conquered independently. The agent subsequently reasons over the text, then plans and employs tools to compose the isolated objects. Finally, a verification and human-feedback mechanism is incorporated into the agent to correct potential attribute errors and refine the generated images. Guided by the LLM agent, we propose a tuning-free multi-concept customization model and a layout-to-image generation model as the tools for concept composition, and a local image editing method as the tool that interacts with the agent for verification. Across these tools, the scene layout controls the image generation process to prevent confusion among multiple objects. Extensive experiments demonstrate the superiority of our approach for compositional text-to-image generation: CompAgent achieves more than a 10% improvement on T2I-CompBench, a comprehensive benchmark for open-world compositional T2I generation. Extensions to various related tasks further illustrate the flexibility of CompAgent for potential applications.
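As a rough illustration only, the Python sketch below mirrors the divide-and-conquer control flow the abstract describes: decompose the prompt, handle each object independently, compose with a layout-constrained tool, then verify and self-correct. Every LLM and diffusion-model call is stubbed out, and all names here (decompose, conquer, compose, verify, local_edit, SceneObject) are hypothetical illustrations, not the paper's actual interface.

```python
# Hypothetical sketch of CompAgent's divide-and-conquer loop.
# All LLM / diffusion calls are stubbed with placeholder returns.
from dataclasses import dataclass


@dataclass
class SceneObject:
    name: str
    attributes: list[str]
    bbox: tuple[float, float, float, float]  # normalized layout box


@dataclass
class Plan:
    objects: list[SceneObject]
    relationships: list[str]


def decompose(prompt: str) -> Plan:
    """Divide: the LLM agent extracts objects and attributes and predicts a layout."""
    # Stub: a real system would prompt an LLM and parse structured output.
    return Plan(
        objects=[
            SceneObject("dog", ["brown"], (0.05, 0.4, 0.45, 0.95)),
            SceneObject("cat", ["white"], (0.55, 0.4, 0.95, 0.95)),
        ],
        relationships=["dog to the left of cat"],
    )


def conquer(obj: SceneObject) -> str:
    """Conquer: generate each object in isolation (stubbed as a label)."""
    return f"image({' '.join(obj.attributes)} {obj.name})"


def compose(plan: Plan, parts: list[str]) -> str:
    """Compose: the agent picks a tool (multi-concept customization or
    layout-to-image); the predicted layout keeps objects from mixing."""
    return f"compose({parts}, layout={[o.bbox for o in plan.objects]})"


def verify(image: str, plan: Plan) -> bool:
    """Verification: check attributes and relations (stub: always passes)."""
    return True


def local_edit(image: str, plan: Plan) -> str:
    """Correction tool: local image editing for attribute errors (stub)."""
    return image


def compagent(prompt: str, max_rounds: int = 3) -> str:
    plan = decompose(prompt)                     # divide
    parts = [conquer(o) for o in plan.objects]   # conquer independently
    image = compose(plan, parts)                 # compose with tools
    for _ in range(max_rounds):                  # verify and self-correct
        if verify(image, plan):
            break
        image = local_edit(image, plan)
    return image


print(compagent("a brown dog to the left of a white cat"))
```

The verify-and-edit loop at the end corresponds to the abstract's verification and feedback mechanism: composition errors are repaired locally rather than by regenerating the whole image.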