Hala Technical Report: Building Arabic-Centric Instruction & Translation Models at Scale

September 17, 2025
Authors: Hasan Abed Al Kader Hammoud, Mohammad Zbeeb, Bernard Ghanem
cs.AI

Abstract

We present Hala, a family of Arabic-centric instruction and translation models built with our translate-and-tune pipeline. We first compress a strong AR↔EN teacher to FP8 (yielding ≈2× higher throughput with no quality loss) and use it to create high-fidelity bilingual supervision. A lightweight language model, LFM2-1.2B, is then fine-tuned on this data and used to translate high-quality English instruction sets into Arabic, producing a million-scale corpus tailored to instruction following. We train Hala models at 350M, 700M, 1.2B, and 9B parameters, and apply slerp merging to balance Arabic specialization with base-model strengths. On Arabic-centric benchmarks, Hala achieves state-of-the-art results within both the "nano" (≤2B) and "small" (7-9B) categories, outperforming their base models. We release models, data, evaluation, and recipes to accelerate research in Arabic NLP.
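The slerp merging mentioned above interpolates between two sets of model weights along a great circle rather than a straight line, which tends to preserve the norm and geometry of both parents. The report does not specify its exact implementation; the sketch below is a minimal per-tensor illustration of the standard slerp formula (the function name, tensors, and `t=0.5` mixing ratio are illustrative assumptions, not Hala's actual recipe):

```python
import numpy as np

def slerp(w_base, w_tuned, t=0.5, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    Flattens both tensors, interpolates along the great circle
    between them, and reshapes the result back. Falls back to
    linear interpolation when the vectors are nearly parallel,
    where the spherical formula becomes numerically unstable.
    """
    a = w_base.ravel()
    b = w_tuned.ravel()
    # Angle between the two flattened weight vectors
    cos_omega = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    cos_omega = np.clip(cos_omega, -1.0, 1.0)
    omega = np.arccos(cos_omega)
    if omega < eps:  # nearly parallel: plain lerp is safer
        merged = (1.0 - t) * a + t * b
    else:
        merged = (np.sin((1.0 - t) * omega) * a
                  + np.sin(t * omega) * b) / np.sin(omega)
    return merged.reshape(w_base.shape)

# Toy example: merge two orthogonal "layers" with equal weight
base = np.array([[1.0, 0.0], [0.0, 1.0]])
tuned = np.array([[0.0, 1.0], [1.0, 0.0]])
merged = slerp(base, tuned, t=0.5)
```

In a real merge this would be applied tensor-by-tensor across the base and Arabic-specialized checkpoints, with `t` chosen to trade off Arabic specialization against base-model strengths.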