Ming-Flash-Omni: A Sparse, Unified Architecture for Multimodal Perception and Generation
October 28, 2025
Authors: Inclusion AI, Bowen Ma, Cheng Zou, Canxiang Yan, Chunxiang Jin, Chunjie Shen, Dandan Zheng, Fudong Wang, Furong Xu, GuangMing Yao, Jun Zhou, Jingdong Chen, Jianing Li, Jianxin Sun, Jiajia Liu, Jianjiang Zhu, Jianping Jiang, Jun Peng, Kaixiang Ji, Kaimeng Ren, Libin Wang, Lixiang Ru, Longhua Tan, Lan Wang, Mochen Bai, Ning Gao, Qingpei Guo, Qinglong Zhang, Qiang Xu, Rui Liu, Ruijie Xiong, Ruobing Zheng, Sirui Gao, Tianqi Li, Tinghao Liu, Weilong Chai, Xinyu Xiao, Xiaomei Wang, Xiaolong Wang, Xiao Lu, Xiaoyu Li, Xingning Dong, Xuzheng Yu, Yi Yuan, Yuting Gao, Yuting Xiao, Yunxiao Sun, Yipeng Chen, Yifan Mao, Yifei Wu, Yongjie Lyu, Ziping Ma, Zhiqiang Fang, Zhihao Qiu, Ziyuan Huang, Zizheng Yang, Zhengyu He
cs.AI
Abstract
We propose Ming-Flash-Omni, an upgraded version of Ming-Omni, built upon a
sparser Mixture-of-Experts (MoE) variant of Ling-Flash-2.0 with 100 billion
total parameters, of which only 6.1 billion are active per token. This
architecture enables highly efficient scaling, dramatically improving
computational efficiency while significantly expanding model capacity, and
supports stronger unified multimodal intelligence across vision, speech, and
language, representing a key step toward Artificial General Intelligence (AGI).
Compared to its predecessor, the upgraded version exhibits substantial
improvements across multimodal understanding and generation. We significantly
advance speech recognition capabilities, achieving state-of-the-art performance
in contextual ASR and highly competitive results in dialect-aware ASR. In image
generation, Ming-Flash-Omni introduces high-fidelity text rendering and
demonstrates marked gains in scene consistency and identity preservation during
image editing. Furthermore, Ming-Flash-Omni introduces generative segmentation,
a capability that not only achieves strong standalone segmentation performance
but also enhances spatial control in image generation and improves editing
consistency. Notably, Ming-Flash-Omni achieves state-of-the-art results in
text-to-image generation and generative segmentation, and sets new records on
all 12 contextual ASR benchmarks, all within a single unified architecture.
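
The headline parameter counts (100 billion total, 6.1 billion active per token) follow from sparse MoE routing: a lightweight router scores each token against a pool of expert feed-forward networks and dispatches it to only a few of them, so most parameters stay idle for any given token. Below is a minimal, self-contained sketch of top-k MoE routing in PyTorch; the expert count, top-k value, and layer sizes are hypothetical values chosen for exposition and are not Ming-Flash-Omni's actual configuration.

```python
# Minimal sketch of sparse Mixture-of-Experts (MoE) routing of the kind the
# abstract describes: most parameters sit in expert networks, but each token
# activates only a few of them. Expert count, top-k, and dimensions here are
# illustrative assumptions, not Ming-Flash-Omni's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoELayer(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int, top_k: int):
        super().__init__()
        self.top_k = top_k
        # Router: scores every token against every expert.
        self.router = nn.Linear(d_model, num_experts, bias=False)
        # Experts: independent feed-forward blocks; only top_k run per token.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        gate_logits = self.router(x)                       # (tokens, experts)
        weights, indices = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)               # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e
                if mask.any():                             # run expert e only on the tokens routed to it
                    w = weights[mask, slot].unsqueeze(-1)  # (n_selected, 1)
                    out[mask] += w * expert(x[mask])
        return out


# Illustrative usage: with 4 of 64 experts active, roughly 1/16 of the expert
# parameters participate per token, the same order as 6.1B active out of 100B.
layer = SparseMoELayer(d_model=512, d_ff=2048, num_experts=64, top_k=4)
tokens = torch.randn(10, 512)
print(layer(tokens).shape)  # torch.Size([10, 512])
```

The design trade-off this illustrates is the one the abstract claims: total capacity grows with the number of experts, while per-token compute scales only with the top-k experts actually executed.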