Ming-Flash-Omni: A Sparse, Unified Architecture for Multimodal Perception and Generation
October 28, 2025
Authors: Inclusion AI, Bowen Ma, Cheng Zou, Canxiang Yan, Chunxiang Jin, Chunjie Shen, Dandan Zheng, Fudong Wang, Furong Xu, GuangMing Yao, Jun Zhou, Jingdong Chen, Jianing Li, Jianxin Sun, Jiajia Liu, Jianjiang Zhu, Jianping Jiang, Jun Peng, Kaixiang Ji, Kaimeng Ren, Libin Wang, Lixiang Ru, Longhua Tan, Lan Wang, Mochen Bai, Ning Gao, Qingpei Guo, Qinglong Zhang, Qiang Xu, Rui Liu, Ruijie Xiong, Ruobing Zheng, Sirui Gao, Tianqi Li, Tinghao Liu, Weilong Chai, Xinyu Xiao, Xiaomei Wang, Xiaolong Wang, Xiao Lu, Xiaoyu Li, Xingning Dong, Xuzheng Yu, Yi Yuan, Yuting Gao, Yuting Xiao, Yunxiao Sun, Yipeng Chen, Yifan Mao, Yifei Wu, Yongjie Lyu, Ziping Ma, Zhiqiang Fang, Zhihao Qiu, Ziyuan Huang, Zizheng Yang, Zhengyu He
cs.AI
Abstract
We propose Ming-Flash-Omni, an upgraded version of Ming-Omni, built upon a
sparser Mixture-of-Experts (MoE) variant of Ling-Flash-2.0 with 100 billion
total parameters, of which only 6.1 billion are active per token. This
architecture enables highly efficient scaling (dramatically improving
computational efficiency while significantly expanding model capacity) and
empowers stronger unified multimodal intelligence across vision, speech, and
language, representing a key step toward Artificial General Intelligence (AGI).
Compared to its predecessor, the upgraded version exhibits substantial
improvements across multimodal understanding and generation. We significantly
advance speech recognition capabilities, achieving state-of-the-art performance
in contextual ASR and highly competitive results in dialect-aware ASR. In image
generation, Ming-Flash-Omni introduces high-fidelity text rendering and
demonstrates marked gains in scene consistency and identity preservation during
image editing. Furthermore, Ming-Flash-Omni introduces generative segmentation,
a capability that not only achieves strong standalone segmentation performance
but also enhances spatial control in image generation and improves editing
consistency. Notably, Ming-Flash-Omni achieves state-of-the-art results in
text-to-image generation and generative segmentation, and sets new records on
all 12 contextual ASR benchmarks, all within a single unified architecture.
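The efficiency claim above rests on sparse expert routing: a learned router dispatches each token to a small subset of expert feed-forward networks, so only a fraction of the model's total parameters participate in any single forward pass. The sketch below illustrates this mechanism in generic PyTorch. It is a minimal illustration, not the Ming-Flash-Omni or Ling-Flash-2.0 implementation; the class name SparseMoELayer and all sizes (32 experts, top-2 routing, hidden dimensions) are assumed placeholders.

    # Minimal sketch of sparse top-k Mixture-of-Experts routing.
    # NOT the Ming-Flash-Omni / Ling-Flash-2.0 implementation; all
    # dimensions, the expert count, and top_k are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SparseMoELayer(nn.Module):
        def __init__(self, d_model=512, d_ff=2048, num_experts=32, top_k=2):
            super().__init__()
            self.top_k = top_k
            # A learned router scores every token against every expert.
            self.router = nn.Linear(d_model, num_experts)
            # Each expert is an independent feed-forward network; only the
            # top-k experts chosen by the router run for a given token.
            self.experts = nn.ModuleList(
                nn.Sequential(
                    nn.Linear(d_model, d_ff),
                    nn.GELU(),
                    nn.Linear(d_ff, d_model),
                )
                for _ in range(num_experts)
            )

        def forward(self, x):  # x: (num_tokens, d_model)
            scores = self.router(x)                       # (tokens, experts)
            weights, indices = scores.topk(self.top_k, dim=-1)
            weights = F.softmax(weights, dim=-1)          # renormalize over the chosen experts
            out = torch.zeros_like(x)
            for k in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = indices[:, k] == e
                    if mask.any():
                        # Only the selected experts' parameters touch these
                        # tokens, so active parameters per token remain a
                        # small fraction of the total parameter count.
                        out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
            return out

With 32 experts and top-2 routing, roughly 1/16 of the expert parameters run per token; the 6.1B-active-of-100B-total ratio reported in the abstract reflects the same principle at production scale, with routing details specific to Ling-Flash-2.0.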