

MosaicFusion: Diffusion Models as Data Augmenters for Large Vocabulary Instance Segmentation

September 22, 2023
Authors: Jiahao Xie, Wei Li, Xiangtai Li, Ziwei Liu, Yew Soon Ong, Chen Change Loy
cs.AI

Abstract

We present MosaicFusion, a simple yet effective diffusion-based data augmentation approach for large vocabulary instance segmentation. Our method is training-free and does not rely on any label supervision. Two key designs enable us to employ an off-the-shelf text-to-image diffusion model as a useful dataset generator for object instances and mask annotations. First, we divide an image canvas into several regions and perform a single round of diffusion process to generate multiple instances simultaneously, conditioning on different text prompts. Second, we obtain corresponding instance masks by aggregating cross-attention maps associated with object prompts across layers and diffusion time steps, followed by simple thresholding and edge-aware refinement processing. Without bells and whistles, our MosaicFusion can produce a significant amount of synthetic labeled data for both rare and novel categories. Experimental results on the challenging LVIS long-tailed and open-vocabulary benchmarks demonstrate that MosaicFusion can significantly improve the performance of existing instance segmentation models, especially for rare and novel categories. Code will be released at https://github.com/Jiahao000/MosaicFusion.
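The second design, turning cross-attention maps into instance masks, can be illustrated with a minimal sketch. It assumes the cross-attention maps for a given object-prompt token have already been collected from every attention layer and diffusion time step; the function name `extract_instance_mask`, the threshold value, and the output resolution below are illustrative assumptions, not the authors' released code, and the edge-aware refinement step is only indicated as a comment.

```python
import torch
import torch.nn.functional as F

def extract_instance_mask(attn_maps, out_size=(512, 512), threshold=0.35):
    """Aggregate cross-attention maps for one object-prompt token into a
    binary instance mask (illustrative sketch, not the official implementation).

    attn_maps: list of 2D tensors (H_l x W_l), one per (layer, time step),
               holding the attention weights for the object token.
    """
    upsampled = []
    for amap in attn_maps:
        # Upsample every map to a common resolution before averaging.
        amap = amap[None, None].float()                  # (1, 1, H, W)
        amap = F.interpolate(amap, size=out_size, mode="bilinear",
                             align_corners=False)
        upsampled.append(amap[0, 0])

    # Aggregate across layers and diffusion time steps by averaging.
    agg = torch.stack(upsampled).mean(dim=0)

    # Normalize to [0, 1] and apply simple thresholding.
    agg = (agg - agg.min()) / (agg.max() - agg.min() + 1e-8)
    mask = (agg > threshold).float()

    # The paper additionally applies edge-aware refinement (omitted here).
    return mask
```

In MosaicFusion, one such mask is produced per canvas region, so each text prompt yields both a synthetic object instance and its corresponding mask annotation.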