

Diffree: Text-Guided Shape Free Object Inpainting with Diffusion Model

July 24, 2024
作者: Lirui Zhao, Tianshuo Yang, Wenqi Shao, Yuxin Zhang, Yu Qiao, Ping Luo, Kaipeng Zhang, Rongrong Ji
cs.AI

Abstract

This paper addresses an important problem of object addition for images with only text guidance. It is challenging because the new object must be integrated seamlessly into the image with consistent visual context, such as lighting, texture, and spatial location. While existing text-guided image inpainting methods can add objects, they either fail to preserve the background consistency or involve cumbersome human intervention in specifying bounding boxes or user-scribbled masks. To tackle this challenge, we introduce Diffree, a Text-to-Image (T2I) model that facilitates text-guided object addition with only text control. To this end, we curate OABench, an exquisite synthetic dataset by removing objects with advanced image inpainting techniques. OABench comprises 74K real-world tuples of an original image, an inpainted image with the object removed, an object mask, and object descriptions. Trained on OABench using the Stable Diffusion model with an additional mask prediction module, Diffree uniquely predicts the position of the new object and achieves object addition with guidance from only text. Extensive experiments demonstrate that Diffree excels in adding new objects with a high success rate while maintaining background consistency, spatial appropriateness, and object relevance and quality.
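To make the data and architecture described in the abstract concrete, the sketch below shows how an OABench-style training tuple and an auxiliary mask-prediction head on a latent-diffusion backbone might be represented in PyTorch. This is a minimal illustrative sketch, not the authors' released code: the class names, field names, and the feature channel count are assumptions.

```python
# Hypothetical sketch of an OABench-style tuple and a mask-prediction head.
# Names and channel sizes are illustrative assumptions, not Diffree's actual code.

from dataclasses import dataclass

import torch
import torch.nn as nn


@dataclass
class OABenchTuple:
    """One synthetic training example: the paper describes tuples of an original
    image, an inpainted image with the object removed, an object mask, and an
    object description."""
    original_image: torch.Tensor    # (3, H, W), object present
    inpainted_image: torch.Tensor   # (3, H, W), object removed by inpainting
    object_mask: torch.Tensor       # (1, H, W), region where the object was
    object_caption: str             # text description of the object


class MaskPredictionHead(nn.Module):
    """Hypothetical auxiliary head that predicts where a new object should go,
    given intermediate diffusion features (channel count is an assumption)."""

    def __init__(self, in_channels: int = 320):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv2d(64, 1, kernel_size=1),  # 1-channel mask logits
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (B, C, h, w) latent-resolution feature map -> (B, 1, h, w) mask
        return torch.sigmoid(self.head(features))


if __name__ == "__main__":
    # Smoke test with random latent-resolution features (64x64 for 512x512 images).
    head = MaskPredictionHead(in_channels=320)
    feats = torch.randn(2, 320, 64, 64)
    mask = head(feats)
    print(mask.shape)  # torch.Size([2, 1, 64, 64])
```

In a pipeline of this shape, the predicted mask would indicate where the new object is placed, so no bounding box or user-scribbled mask is needed at inference time, matching the text-only control described in the abstract.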
