

Jodi: Unification of Visual Generation and Understanding via Joint Modeling

May 25, 2025
Authors: Yifeng Xu, Zhenliang He, Meina Kan, Shiguang Shan, Xilin Chen
cs.AI

Abstract

Visual generation and understanding are two deeply interconnected aspects of human intelligence, yet they have been traditionally treated as separate tasks in machine learning. In this paper, we propose Jodi, a diffusion framework that unifies visual generation and understanding by jointly modeling the image domain and multiple label domains. Specifically, Jodi is built upon a linear diffusion transformer along with a role switch mechanism, which enables it to perform three particular types of tasks: (1) joint generation, where the model simultaneously generates images and multiple labels; (2) controllable generation, where images are generated conditioned on any combination of labels; and (3) image perception, where multiple labels can be predicted at once from a given image. Furthermore, we present the Joint-1.6M dataset, which contains 200,000 high-quality images collected from public sources, automatic labels for 7 visual domains, and LLM-generated captions. Extensive experiments demonstrate that Jodi excels in both generation and understanding tasks and exhibits strong extensibility to a wider range of visual domains. Code is available at https://github.com/VIPL-GENUN/Jodi.
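The role switch mechanism described in the abstract can be made concrete with a short sketch. The following is a minimal, hypothetical illustration, not the authors' released code: the function `role_switch_inputs`, the role constants, the toy noise schedule, and the tensor layout are all assumptions made for exposition. Only the three role assignments at the end mirror the task types named in the abstract.

```python
import math
import torch

GENERATE, CONDITION = "generate", "condition"  # per-domain roles (illustrative)

def role_switch_inputs(domains, roles, t):
    """Prepare per-domain inputs for one joint denoising step.

    domains: list of (B, C, H, W) tensors; domain 0 is the image,
             the rest are label maps (e.g., depth, edges, segmentation).
    roles:   one role per domain. GENERATE domains are noised to
             timestep t and will be denoised; CONDITION domains are
             passed through clean with timestep 0 (treated as given).
    t:       scalar diffusion time in [0, 1] (toy schedule, an assumption).
    """
    inputs, timesteps = [], []
    for x, role in zip(domains, roles):
        if role == GENERATE:
            eps = torch.randn_like(x)
            # noise the domain to time t (toy interpolation schedule)
            inputs.append(math.sqrt(1.0 - t) * x + math.sqrt(t) * eps)
            timesteps.append(t)
        else:
            inputs.append(x)        # condition: keep the clean signal
            timesteps.append(0.0)   # timestep 0 marks "observed"
    return inputs, timesteps

# Role assignments reproduce the abstract's three task types:
# joint generation:        [GENERATE] * (1 + num_labels)
# controllable generation: [GENERATE] + [CONDITION] * num_labels
# image perception:        [CONDITION] + [GENERATE] * num_labels
```

Under this reading, the diffusion transformer attends over all domains jointly, and swapping the per-domain roles alone moves the same model between generation, controllable generation, and perception; the actual interface in the released code may differ.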
