Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone

April 22, 2024
Authors: Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, Alon Benhaim, Misha Bilenko, Johan Bjorck, Sébastien Bubeck, Martin Cai, Caio César Teodoro Mendes, Weizhu Chen, Vishrav Chaudhary, Parul Chopra, Allie Del Giorno, Gustavo de Rosa, Matthew Dixon, Ronen Eldan, Dan Iter, Abhishek Goswami, Suriya Gunasekar, Emman Haider, Junheng Hao, Russell J. Hewett, Jamie Huynh, Mojan Javaheripi, Xin Jin, Piero Kauffmann, Nikos Karampatziakis, Dongwoo Kim, Mahmoud Khademi, Lev Kurilenko, James R. Lee, Yin Tat Lee, Yuanzhi Li, Chen Liang, Weishung Liu, Eric Lin, Zeqi Lin, Piyush Madan, Arindam Mitra, Hardik Modi, Anh Nguyen, Brandon Norick, Barun Patra, Daniel Perez-Becker, Thomas Portet, Reid Pryzant, Heyang Qin, Marko Radmilac, Corby Rosset, Sambudha Roy, Olli Saarikivi, Amin Saied, Adil Salim, Michael Santacroce, Shital Shah, Ning Shang, Hiteshi Sharma, Xia Song, Olatunji Ruwase, Xin Wang, Rachel Ward, Guanhua Wang, Philipp Witte, Michael Wyatt, Can Xu, Jiahang Xu, Sonali Yadav, Fan Yang, Ziyi Yang, Donghan Yu, Chengruidong Zhang, Cyril Zhang, Jianwen Zhang, Li Lyna Zhang, Yi Zhang, Yunan Zhang, Xiren Zhou
cs.AI

Abstract

We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone. The innovation lies entirely in our dataset for training, a scaled-up version of the one used for phi-2, composed of heavily filtered web data and synthetic data. The model is also further aligned for robustness, safety, and chat format. We also provide some initial parameter-scaling results with 7B and 14B models trained for 4.8T tokens, called phi-3-small and phi-3-medium, both significantly more capable than phi-3-mini (e.g., respectively 75% and 78% on MMLU, and 8.7 and 8.9 on MT-bench).
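As a rough illustration of the chat-format alignment described in the abstract, the sketch below loads a phi-3-mini checkpoint with the Hugging Face transformers library and runs one chat turn. The checkpoint name "microsoft/Phi-3-mini-4k-instruct", the dtype, and the generation settings are assumptions for demonstration and are not specified in the report itself.

# Minimal sketch (not from the report): chat inference with a phi-3-mini checkpoint.
# Assumes a recent transformers release with native Phi-3 support; older versions
# may require trust_remote_code=True when loading.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed released checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 3.8B model small in memory
    device_map="auto",
)

# The model is aligned for a chat format, so messages go through the chat template.
messages = [{"role": "user", "content": "Summarize what a 3.8B parameter language model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))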
