A Survey of Small Language Models
October 25, 2024
作者: Chien Van Nguyen, Xuan Shen, Ryan Aponte, Yu Xia, Samyadeep Basu, Zhengmian Hu, Jian Chen, Mihir Parmar, Sasidhar Kunapuli, Joe Barrow, Junda Wu, Ashish Singh, Yu Wang, Jiuxiang Gu, Franck Dernoncourt, Nesreen K. Ahmed, Nedim Lipka, Ruiyi Zhang, Xiang Chen, Tong Yu, Sungchul Kim, Hanieh Deilamsalehy, Namyong Park, Mike Rimer, Zhehao Zhang, Huanrui Yang, Ryan A. Rossi, Thien Huu Nguyen
cs.AI
Abstract
Small Language Models (SLMs) have become increasingly important due to their efficiency and their strong performance on a variety of language tasks with minimal computational resources, making them ideal for many settings, including on-device, mobile, and edge deployments. In this article, we present a comprehensive survey of SLMs, focusing on their architectures, training techniques, and model compression techniques. We propose a novel taxonomy for categorizing the methods used to optimize SLMs, including pruning, quantization, and other model compression techniques. We summarize the benchmark datasets that are useful for evaluating SLMs, along with the commonly used evaluation metrics. Additionally, we highlight key open challenges that remain to be addressed. Our survey aims to serve as a valuable resource for researchers and practitioners interested in developing and deploying small yet efficient language models.
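To make the taxonomy's terminology concrete, the sketch below illustrates two of the compression techniques the survey categorizes: magnitude pruning and symmetric 8-bit post-training weight quantization. This is not code from the paper; it is a minimal, illustrative sketch assuming PyTorch, and the function names and parameters here are hypothetical.

```python
# Minimal sketch of two SLM compression techniques (illustrative only,
# not from the survey): magnitude pruning and symmetric int8 quantization.
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight.clone()
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)

def quantize_int8(weight: torch.Tensor):
    """Symmetric per-tensor 8-bit quantization; returns int8 weights + scale."""
    scale = weight.abs().max() / 127.0
    q = torch.clamp(torch.round(weight / scale), -128, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate float32 weight tensor from int8 values."""
    return q.to(torch.float32) * scale

if __name__ == "__main__":
    w = torch.randn(256, 256)
    w_pruned = magnitude_prune(w, sparsity=0.5)   # zero ~50% of weights
    q, scale = quantize_int8(w_pruned)            # store as int8 + one scale
    w_restored = dequantize(q, scale)
    err = (w_pruned - w_restored).abs().mean().item()
    print(f"mean abs quantization error: {err:.6f}")
```

The methods the survey covers are considerably more sophisticated (e.g., structured pruning and quantization-aware training), but the core idea of trading precision and density for memory and compute savings is the same.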