FABRIC: Personalizing Diffusion Models with Iterative Feedback
July 19, 2023
Authors: Dimitri von Rütte, Elisabetta Fedele, Jonathan Thomm, Lukas Wolf
cs.AI
Abstract
In an era where visual content generation is increasingly driven by machine
learning, the integration of human feedback into generative models presents
significant opportunities for enhancing user experience and output quality.
This study explores strategies for incorporating iterative human feedback into
the generative process of diffusion-based text-to-image models. We propose
FABRIC, a training-free approach applicable to a wide range of popular
diffusion models, which exploits the self-attention layer present in the most
widely used architectures to condition the diffusion process on a set of
feedback images. To ensure a rigorous assessment of our approach, we introduce
a comprehensive evaluation methodology, offering a robust mechanism to quantify
the performance of generative visual models that integrate human feedback.
Through exhaustive analysis, we show that generation results improve over
multiple rounds of iterative feedback, implicitly optimizing arbitrary user
preferences.
The potential applications of these findings extend to fields such as
personalized content creation and customization.
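
The core mechanism described above, conditioning the diffusion process on feedback images through the self-attention layers, can be illustrated with a minimal sketch. The snippet below is a hypothetical simplification, not the authors' implementation: it extends a single attention step with extra keys and values (standing in for features extracted from liked feedback images) and biases the attention scores toward those feedback tokens by a tunable weight.

```python
import numpy as np


def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def attention_with_feedback(q, k, v, fb_k=None, fb_v=None, weight=1.0):
    """Single-head attention whose keys/values are extended with
    feedback-image keys/values (fb_k, fb_v), reweighted by `weight`.

    This is an illustrative stand-in for attention-based feedback
    injection; names and the log-weight bias are assumptions, not
    the paper's exact formulation.
    """
    n_fb = 0
    if fb_k is not None:
        n_fb = fb_k.shape[0]
        # Attend over the union of current tokens and feedback tokens.
        k = np.concatenate([k, fb_k], axis=0)
        v = np.concatenate([v, fb_v], axis=0)

    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)

    if n_fb > 0:
        # Additive log-space bias: weight > 1 pulls attention toward
        # the feedback tokens, weight < 1 pushes it away.
        bias = np.zeros(k.shape[0])
        bias[-n_fb:] = np.log(weight)
        scores = scores + bias

    return softmax(scores, axis=-1) @ v
```

With zero queries and keys, attention is uniform over all tokens, so increasing `weight` visibly shifts the output toward the feedback values, mirroring how stronger positive feedback steers generation toward liked images.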