

Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language Models via Mixture-of-LoRAs

March 3, 2025
作者: Abdelrahman Abouelenin, Atabak Ashfaq, Adam Atkinson, Hany Awadalla, Nguyen Bach, Jianmin Bao, Alon Benhaim, Martin Cai, Vishrav Chaudhary, Congcong Chen, Dong Chen, Dongdong Chen, Junkun Chen, Weizhu Chen, Yen-Chun Chen, Yi-ling Chen, Qi Dai, Xiyang Dai, Ruchao Fan, Mei Gao, Min Gao, Amit Garg, Abhishek Goswami, Junheng Hao, Amr Hendy, Yuxuan Hu, Xin Jin, Mahmoud Khademi, Dongwoo Kim, Young Jin Kim, Gina Lee, Jinyu Li, Yunsheng Li, Chen Liang, Xihui Lin, Zeqi Lin, Mengchen Liu, Yang Liu, Gilsinia Lopez, Chong Luo, Piyush Madan, Vadim Mazalov, Ali Mousavi, Anh Nguyen, Jing Pan, Daniel Perez-Becker, Jacob Platin, Thomas Portet, Kai Qiu, Bo Ren, Liliang Ren, Sambuddha Roy, Ning Shang, Yelong Shen, Saksham Singhal, Subhojit Som, Xia Song, Tetyana Sych, Praneetha Vaddamanu, Shuohang Wang, Yiming Wang, Zhenghao Wang, Haibin Wu, Haoran Xu, Weijian Xu, Yifan Yang, Ziyi Yang, Donghan Yu, Ishmam Zabir, Jianwen Zhang, Li Lyna Zhang, Yunan Zhang, Xiren Zhou
cs.AI

Abstract

We introduce Phi-4-Mini and Phi-4-Multimodal, compact yet highly capable language and multimodal models. Phi-4-Mini is a 3.8-billion-parameter language model trained on high-quality web and synthetic data, significantly outperforming recent open-source models of similar size and matching the performance of models twice its size on math and coding tasks requiring complex reasoning. This achievement is driven by a carefully curated synthetic data recipe emphasizing high-quality math and coding datasets. Compared to its predecessor, Phi-3.5-Mini, Phi-4-Mini features an expanded vocabulary size of 200K tokens to better support multilingual applications, as well as grouped-query attention for more efficient long-sequence generation. Phi-4-Multimodal is a multimodal model that integrates text, vision, and speech/audio input modalities into a single model. Its novel modality extension approach leverages LoRA adapters and modality-specific routers to allow multiple inference modes combining various modalities without interference. For example, it currently ranks first on the OpenASR leaderboard, even though the LoRA component of the speech/audio modality has just 460 million parameters. Phi-4-Multimodal supports scenarios involving (vision + language), (vision + speech), and (speech/audio) inputs, outperforming larger vision-language and speech-language models on a wide range of tasks. Additionally, we conducted further training experiments with Phi-4-Mini to enhance its reasoning capabilities. Despite its compact 3.8-billion-parameter size, this experimental version achieves reasoning performance on par with or surpassing significantly larger models, including DeepSeek-R1-Distill-Qwen-7B and DeepSeek-R1-Distill-Llama-8B.
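
The modality-extension idea described above (LoRA adapters selected by modality-specific routing on top of a frozen base model) can be illustrated with a minimal sketch. The PyTorch code below is an illustrative assumption, not the paper's implementation: the class names (`LoRAAdapter`, `MixtureOfLoRAsLinear`), the rank and scaling values, and the use of a simple modality tag as the "router" are all hypothetical choices made only to show how per-modality low-rank deltas can be added to a frozen projection without interfering with the text-only path.

```python
# Minimal sketch of a Mixture-of-LoRAs layer (illustrative only, not the paper's code).
from typing import Optional

import torch
import torch.nn as nn


class LoRAAdapter(nn.Module):
    """Low-rank delta: x -> (alpha / r) * B(A(x)), added on top of a frozen projection."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.down = nn.Linear(in_features, rank, bias=False)   # A: d_in -> r
        self.up = nn.Linear(rank, out_features, bias=False)    # B: r -> d_out
        self.scale = alpha / rank
        nn.init.zeros_(self.up.weight)  # start as a no-op so the frozen base is initially unchanged

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x)) * self.scale


class MixtureOfLoRAsLinear(nn.Module):
    """A frozen base projection plus one LoRA adapter per modality.

    The "router" here is simply the modality tag of the incoming request: only the
    selected adapter's delta is added, so text-only inference bypasses every adapter
    and the modalities do not interfere with one another.
    """

    def __init__(self, in_features: int, out_features: int, modalities=("vision", "speech")):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)  # the base language model stays frozen
        self.adapters = nn.ModuleDict(
            {m: LoRAAdapter(in_features, out_features) for m in modalities}
        )

    def forward(self, x: torch.Tensor, modality: Optional[str] = None) -> torch.Tensor:
        out = self.base(x)
        if modality is not None and modality in self.adapters:
            out = out + self.adapters[modality](x)  # route the matching LoRA delta in
        return out


if __name__ == "__main__":
    layer = MixtureOfLoRAsLinear(in_features=64, out_features=64)
    tokens = torch.randn(2, 10, 64)                # (batch, sequence, hidden)
    text_out = layer(tokens)                       # pure language path, no adapter applied
    vision_out = layer(tokens, modality="vision")  # vision LoRA delta added
    print(text_out.shape, vision_out.shape)
```

Because the adapters are zero-initialized and the base weights are frozen, enabling a new modality in this sketch leaves the original language-model behavior untouched, which is the interference-free property the abstract highlights.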
