

HARE: HumAn pRiors, a key to small language model Efficiency

June 17, 2024
Authors: Lingyun Zhang, Bin Jin, Gaojian Ge, Lunhui Liu, Xuewen Shen, Mingyong Wu, Houqian Zhang, Yongneng Jiang, Shiqi Chen, Shi Pu
cs.AI

Abstract

Human priors play a crucial role in efficiently utilizing data in deep learning. However, with the development of large language models (LLMs), there is an increasing emphasis on scaling both model size and data volume, which often diminishes the importance of human priors in data construction. Influenced by these trends, existing Small Language Models (SLMs) mainly rely on web-scraped large-scale training data, neglecting the proper incorporation of human priors. This oversight limits the training efficiency of language models in resource-constrained settings. In this paper, we propose a principle to leverage human priors for data construction. This principle emphasizes achieving high-performance SLMs by training on a concise dataset that accommodates both semantic diversity and data quality consistency, while avoiding benchmark data leakage. Following this principle, we train an SLM named HARE-1.1B. Extensive experiments on large-scale benchmark datasets demonstrate that HARE-1.1B performs favorably against state-of-the-art SLMs, validating the effectiveness of the proposed principle. Additionally, this provides new insights into efficient language model training in resource-constrained environments from the view of human priors.
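As a rough, hypothetical sketch of the kind of data-construction principle the abstract describes (not the actual HARE-1.1B pipeline, which is detailed in the paper), the three stated criteria could be expressed as a single filtering pass: a quality floor for consistency, n-gram overlap against benchmark texts to avoid leakage, and near-duplicate removal to preserve diversity. All function names, thresholds, and heuristics below are illustrative assumptions.

```python
# Illustrative sketch only: a minimal data-construction filter reflecting the
# three criteria named in the abstract (quality consistency, benchmark
# decontamination, semantic diversity). Heuristics are hypothetical and not
# taken from the paper.
from __future__ import annotations
import re
from typing import Iterable


def ngrams(text: str, n: int = 8) -> set[str]:
    """Word n-grams used for overlap checks."""
    words = re.findall(r"\w+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def quality_score(text: str) -> float:
    """Toy quality proxy: share of alphabetic/whitespace characters (assumption)."""
    if not text:
        return 0.0
    return sum(c.isalpha() or c.isspace() for c in text) / len(text)


def build_training_set(
    corpus: Iterable[str],
    benchmark_texts: Iterable[str],
    min_quality: float = 0.8,
    max_benchmark_overlap: float = 0.1,
    max_similarity: float = 0.7,
) -> list[str]:
    # Collect n-grams from benchmark data once, for decontamination checks.
    benchmark_grams: set[str] = set()
    for t in benchmark_texts:
        benchmark_grams |= ngrams(t)

    kept: list[str] = []
    kept_grams: list[set[str]] = []
    for doc in corpus:
        grams = ngrams(doc)
        if not grams:
            continue
        # 1) Quality consistency: drop documents below a fixed quality floor.
        if quality_score(doc) < min_quality:
            continue
        # 2) Leakage avoidance: drop documents overlapping benchmark data.
        if len(grams & benchmark_grams) / len(grams) > max_benchmark_overlap:
            continue
        # 3) Diversity: drop near-duplicates of already-kept documents (Jaccard).
        if any(len(grams & g) / len(grams | g) > max_similarity for g in kept_grams):
            continue
        kept.append(doc)
        kept_grams.append(grams)
    return kept
```

In practice a real pipeline would replace these toy heuristics with learned quality classifiers, scalable near-duplicate detection, and semantic clustering, but the structure above mirrors the principle as stated: keep the dataset concise, consistent in quality, diverse in semantics, and free of benchmark leakage.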
