Baichuan-Omni-1.5 Technical Report
January 26, 2025
Authors: Yadong Li, Jun Liu, Tao Zhang, Tao Zhang, Song Chen, Tianpeng Li, Zehuan Li, Lijun Liu, Lingfeng Ming, Guosheng Dong, Da Pan, Chong Li, Yuanbo Fang, Dongdong Kuang, Mingrui Wang, Chenglin Zhu, Youwei Zhang, Hongyu Guo, Fengyu Zhang, Yuran Wang, Bowen Ding, Wei Song, Xu Li, Yuqi Huo, Zheng Liang, Shusen Zhang, Xin Wu, Shuai Zhao, Linchu Xiong, Yozhen Wu, Jiahui Ye, Wenhao Lu, Bowen Li, Yan Zhang, Yaqi Zhou, Xin Chen, Lei Su, Hongda Zhang, Fuzhong Chen, Xuezhen Dong, Na Nie, Zhiying Wu, Bin Xiao, Ting Li, Shunya Dang, Ping Zhang, Yijia Sun, Jincheng Wu, Jinjie Yang, Xionghai Lin, Zhi Ma, Kegeng Wu, Jia li, Aiyuan Yang, Hui Liu, Jianqiang Zhang, Xiaoxi Chen, Guangwei Ai, Wentao Zhang, Yicong Chen, Xiaoqin Huang, Kun Li, Wenjing Luo, Yifei Duan, Lingling Zhu, Ran Xiao, Zhe Su, Jiani Pu, Dian Wang, Xu Jia, Tianyu Zhang, Mengyu Ai, Mang Wang, Yujing Qiao, Lei Zhang, Yanjun Shen, Fan Yang, Miao Zhen, Yijie Zhou, Mingyang Chen, Fei Li, Chenzheng Zhu, Keer Lu, Yaqi Zhao, Hao Liang, Youquan Li, Yanzhao Qin, Linzhuang Sun, Jianhua Xu, Haoze Sun, Mingan Lin, Zenan Zhou, Weipeng Chen
cs.AI
Abstract
We introduce Baichuan-Omni-1.5, an omni-modal model that not only has
omni-modal understanding capabilities but also provides end-to-end audio
generation capabilities. To achieve fluent and high-quality interaction across
modalities without compromising the capabilities of any modality, we
prioritized optimizing three key aspects. First, we establish a comprehensive
data cleaning and synthesis pipeline for multimodal data, obtaining about 500B
of high-quality data (text, audio, and vision). Second, an audio tokenizer
(Baichuan-Audio-Tokenizer) has been designed to capture both semantic and
acoustic information from audio, enabling seamless integration and enhanced
compatibility with MLLM. Lastly, we designed a multi-stage training strategy
that progressively integrates multimodal alignment and multitask fine-tuning,
ensuring effective synergy across all modalities. Baichuan-Omni-1.5 leads
contemporary models (including GPT4o-mini and MiniCPM-o 2.6) in terms of
comprehensive omni-modal capabilities. Notably, it achieves results comparable
to leading models such as Qwen2-VL-72B across various multimodal medical
benchmarks.