ConceptLab: Creative Generation using Diffusion Prior Constraints
August 3, 2023
Authors: Elad Richardson, Kfir Goldberg, Yuval Alaluf, Daniel Cohen-Or
cs.AI
Abstract
Recent text-to-image generative models have enabled us to transform our words
into vibrant, captivating imagery. The surge of personalization techniques that
has followed has also allowed us to imagine unique concepts in new scenes.
However, an intriguing question remains: How can we generate a new, imaginary
concept that has never been seen before? In this paper, we present the task of
creative text-to-image generation, where we seek to generate new members of a
broad category (e.g., generating a pet that differs from all existing pets). We
leverage the under-studied Diffusion Prior models and show that the creative
generation problem can be formulated as an optimization process over the output
space of the diffusion prior, resulting in a set of "prior constraints". To
keep our generated concept from converging into existing members, we
incorporate a question-answering model that adaptively adds new constraints to
the optimization problem, encouraging the model to discover increasingly more
unique creations. Finally, we show that our prior constraints can also serve as
a strong mixing mechanism allowing us to create hybrids between generated
concepts, introducing even more flexibility into the creative process.