Show-o: One Single Transformer to Unify Multimodal Understanding and Generation

August 22, 2024
Authors: Jinheng Xie, Weijia Mao, Zechen Bai, David Junhao Zhang, Weihao Wang, Kevin Qinghong Lin, Yuchao Gu, Zhijie Chen, Zhenheng Yang, Mike Zheng Shou
cs.AI

Abstract

We present a unified transformer, i.e., Show-o, that unifies multimodal understanding and generation. Unlike fully autoregressive models, Show-o unifies autoregressive and (discrete) diffusion modeling to adaptively handle inputs and outputs of various and mixed modalities. The unified model flexibly supports a wide range of vision-language tasks including visual question-answering, text-to-image generation, text-guided inpainting/extrapolation, and mixed-modality generation. Across various benchmarks, it demonstrates comparable or superior performance to existing individual models with an equivalent or larger number of parameters tailored for understanding or generation. This significantly highlights its potential as a next-generation foundation model. Code and models are released at https://github.com/showlab/Show-o.
