

EXAONE 3.0 7.8B Instruction Tuned Language Model

August 7, 2024
Authors: LG AI Research, Soyoung An, Kyunghoon Bae, Eunbi Choi, Stanley Jungkyu Choi, Yemuk Choi, Seokhee Hong, Yeonjung Hong, Junwon Hwang, Hyojin Jeon, Gerrard Jeongwon Jo, Hyunjik Jo, Jiyeon Jung, Yountae Jung, Euisoon Kim, Hyosang Kim, Joonkee Kim, Seonghwan Kim, Soyeon Kim, Sunkyoung Kim, Yireun Kim, Youchul Kim, Edward Hwayoung Lee, Haeju Lee, Honglak Lee, Jinsik Lee, Kyungmin Lee, Moontae Lee, Seungjun Lee, Woohyung Lim, Sangha Park, Sooyoun Park, Yongmin Park, Boseong Seo, Sihoon Yang, Heuiyeen Yeen, Kyungjae Yoo, Hyeongu Yun
cs.AI

Abstract

We introduce the EXAONE 3.0 instruction-tuned language model, the first open model in the family of Large Language Models (LLMs) developed by LG AI Research. Among the different model sizes, we publicly release the 7.8B instruction-tuned model to promote open research and innovation. Through extensive evaluations across a wide range of public and in-house benchmarks, EXAONE 3.0 demonstrates highly competitive real-world performance and instruction-following capability compared with other state-of-the-art open models of similar size. Our comparative analysis shows that EXAONE 3.0 excels particularly in Korean, while also achieving compelling performance on general tasks and complex reasoning. With its strong real-world effectiveness and bilingual proficiency, we hope that EXAONE will continue to contribute to advances in Expert AI. Our EXAONE 3.0 instruction-tuned model is available at https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct
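Since the released checkpoint is hosted on the Hugging Face Hub, it can be loaded with the standard transformers interface. The following is a minimal sketch, not an official recipe from the report: it assumes the usual AutoModelForCausalLM/AutoTokenizer workflow and chat-template prompting; the exact loading flags (e.g., trust_remote_code) and recommended generation settings are defined by the model card on the Hub, not by this abstract.

```python
# Minimal usage sketch (assumed standard transformers API; see the model card
# at https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct for details).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half precision keeps the 7.8B weights ~16 GB
    device_map="auto",            # spread layers across available devices
    trust_remote_code=True,       # assumption: the repo ships custom model code
)

# The instruct model expects chat-formatted input; build it with the tokenizer's
# chat template rather than raw text.
messages = [{"role": "user", "content": "Introduce EXAONE 3.0 in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same call works with a Korean prompt, which is the setting where the paper reports the model's strongest relative performance.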
