TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space
January 21, 2025
Authors: Daniel Garibi, Shahar Yadin, Roni Paiss, Omer Tov, Shiran Zada, Ariel Ephrat, Tomer Michaeli, Inbar Mosseri, Tali Dekel
cs.AI
Abstract
We present TokenVerse -- a method for multi-concept personalization,
leveraging a pre-trained text-to-image diffusion model. Our framework can
disentangle complex visual elements and attributes from as little as a single
image, while enabling seamless plug-and-play generation of combinations of
concepts extracted from multiple images. As opposed to existing works,
TokenVerse can handle multiple images with multiple concepts each, and supports
a wide range of concepts, including objects, accessories, materials, pose, and
lighting. Our work exploits a DiT-based text-to-image model, in which the input
text affects the generation through both attention and modulation (shift and
scale). We observe that the modulation space is semantic and enables localized
control over complex concepts. Building on this insight, we devise an
optimization-based framework that takes as input an image and a text
description, and finds for each word a distinct direction in the modulation
space. These directions can then be used to generate new images that combine
the learned concepts in a desired configuration. We demonstrate the
effectiveness of TokenVerse in challenging personalization settings, and
showcase its advantages over existing methods. The project's webpage is at
https://token-verse.github.io/
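The abstract describes DiT-style modulation, in which text conditioning scales and shifts intermediate features, and per-word directions learned in that modulation space. The following is a minimal illustrative sketch of that idea in NumPy; the word vectors, the learned directions, and all function names are hypothetical stand-ins (obtained here at random, not by the paper's optimization), intended only to show how a per-concept direction would perturb the modulation parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # feature dimension (illustrative)

def modulate(x, shift, scale):
    """DiT-style modulation: x * (1 + scale) + shift."""
    return x * (1.0 + scale) + shift

# Hypothetical per-word modulation parameters derived from the text prompt.
base_params = {
    "dog":      (rng.normal(size=dim), rng.normal(size=dim)),
    "lighting": (rng.normal(size=dim), rng.normal(size=dim)),
}

# Stand-ins for the per-concept directions TokenVerse would find by
# optimization; here they are just small random vectors.
learned_directions = {w: 0.1 * rng.normal(size=dim) for w in base_params}

def personalized_modulation(word):
    """Offset the word's base shift along its learned direction."""
    shift, scale = base_params[word]
    return shift + learned_directions[word], scale

# Apply the personalized modulation to a feature vector.
x = rng.normal(size=dim)
shift, scale = personalized_modulation("dog")
out = modulate(x, shift, scale)
```

Because each concept is a separate direction, combining concepts from different images amounts to applying each word's offset to its own modulation parameters during generation.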