

Bielik v3 Small: Technical Report

May 5, 2025
Authors: Krzysztof Ociepa, Łukasz Flis, Remigiusz Kinas, Krzysztof Wróbel, Adrian Gwoździej
cs.AI

Abstract

We introduce Bielik v3, a series of parameter-efficient generative text models (1.5B and 4.5B) optimized for Polish language processing. These models demonstrate that smaller, well-optimized architectures can achieve performance comparable to much larger counterparts while requiring substantially fewer computational resources. Our approach incorporates several key innovations: a custom Polish tokenizer (APT4) that significantly improves token efficiency, Weighted Instruction Cross-Entropy Loss to balance learning across instruction types, and Adaptive Learning Rate that dynamically adjusts based on training progress. Trained on a meticulously curated corpus of 292 billion tokens spanning 303 million documents, these models excel across multiple benchmarks, including the Open PL LLM Leaderboard, Complex Polish Text Understanding Benchmark, Polish EQ-Bench, and Polish Medical Leaderboard. The 4.5B parameter model achieves results competitive with models 2-3 times its size, while the 1.5B model delivers strong performance despite its extremely compact profile. These advances establish new benchmarks for parameter-efficient language modeling in less-represented languages, making high-quality Polish language AI more accessible for resource-constrained applications.
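The abstract names a Weighted Instruction Cross-Entropy Loss but does not spell out its formulation here. The sketch below is a rough, hypothetical illustration of the general idea in PyTorch, not the authors' implementation: per-token cross-entropy is computed without reduction, averaged per example, and then rescaled by a weight attached to that example's instruction type, so that over-represented instruction categories do not dominate training. The function name, category labels, and weight values are invented for illustration.

# Minimal sketch, assuming per-example instruction-type labels are available.
# INSTRUCTION_WEIGHTS and weighted_instruction_ce are hypothetical names.
import torch
import torch.nn.functional as F

# Hypothetical per-category weights; the report's actual values are not given in this abstract.
INSTRUCTION_WEIGHTS = {"chat": 1.0, "reasoning": 1.5, "translation": 0.8}

def weighted_instruction_ce(logits, labels, categories, ignore_index=-100):
    """logits: (B, T, V), labels: (B, T), categories: list of B instruction-type names."""
    B, T, V = logits.shape
    # Token-level cross-entropy, unreduced so each example can be reweighted afterwards.
    token_loss = F.cross_entropy(
        logits.reshape(B * T, V),
        labels.reshape(B * T),
        ignore_index=ignore_index,
        reduction="none",
    ).reshape(B, T)

    # Mean loss per example over non-padding tokens.
    mask = (labels != ignore_index).float()
    example_loss = (token_loss * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)

    # Scale each example by the weight of its instruction type, then average over the batch.
    weights = torch.tensor(
        [INSTRUCTION_WEIGHTS.get(c, 1.0) for c in categories],
        device=logits.device,
    )
    return (weights * example_loss).sum() / weights.sum()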

