Bielik v3 Small: Technical Report
May 5, 2025
Authors: Krzysztof Ociepa, Łukasz Flis, Remigiusz Kinas, Krzysztof Wróbel, Adrian Gwoździej
cs.AI
Abstract
We introduce Bielik v3, a series of parameter-efficient generative text
models (1.5B and 4.5B) optimized for Polish language processing. These models
demonstrate that smaller, well-optimized architectures can achieve performance
comparable to much larger counterparts while requiring substantially fewer
computational resources. Our approach incorporates several key innovations: a
custom Polish tokenizer (APT4) that significantly improves token efficiency,
Weighted Instruction Cross-Entropy Loss to balance learning across instruction
types, and Adaptive Learning Rate that dynamically adjusts based on training
progress. Trained on a meticulously curated corpus of 292 billion tokens
spanning 303 million documents, these models excel across multiple benchmarks,
including the Open PL LLM Leaderboard, Complex Polish Text Understanding
Benchmark, Polish EQ-Bench, and Polish Medical Leaderboard. The 4.5B parameter
model achieves results competitive with models 2-3 times its size, while the
1.5B model delivers strong performance despite its extremely compact profile.
These advances establish new benchmarks for parameter-efficient language
modeling in less-represented languages, making high-quality Polish language AI
more accessible for resource-constrained applications.
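
The abstract names a Weighted Instruction Cross-Entropy Loss that balances learning across instruction types. Below is a minimal PyTorch sketch of one plausible form of such a loss, assuming each example carries an integer instruction-type id and each type has a fixed relative weight; the weighting scheme, categories, and function name are illustrative assumptions, not the implementation described in the report.

```python
# Illustrative sketch of a weighted instruction cross-entropy loss.
# The weighting scheme and instruction-type categories are assumptions
# for demonstration, not taken from the Bielik v3 report.
import torch
import torch.nn.functional as F


def weighted_instruction_ce_loss(logits, targets, instr_types, type_weights,
                                 ignore_index=-100):
    """Token-level cross-entropy averaged per example, with each example
    scaled by a weight chosen according to its instruction type.

    logits:       (batch, seq_len, vocab) raw model outputs
    targets:      (batch, seq_len) target token ids, ignore_index for padding
    instr_types:  (batch,) integer id of each example's instruction type
    type_weights: (num_types,) relative weight for each instruction type
    """
    batch, seq_len, vocab = logits.shape
    # Per-token cross-entropy, keeping the batch/sequence structure.
    token_loss = F.cross_entropy(
        logits.reshape(-1, vocab),
        targets.reshape(-1),
        ignore_index=ignore_index,
        reduction="none",
    ).reshape(batch, seq_len)

    # Mean loss per example over its non-padding tokens.
    mask = (targets != ignore_index).float()
    per_example = (token_loss * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)

    # Scale each example by the weight of its instruction type, then
    # normalize by the total weight so the overall loss scale stays stable.
    w = type_weights[instr_types]
    return (w * per_example).sum() / w.sum()


# Toy usage with random data and two hypothetical instruction types.
if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(4, 8, 100)
    targets = torch.randint(0, 100, (4, 8))
    instr_types = torch.tensor([0, 1, 0, 1])
    type_weights = torch.tensor([1.0, 2.0])
    print(weighted_instruction_ce_loss(logits, targets,
                                       instr_types, type_weights).item())
```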