Bielik v3 Small: Technical Report
We introduce Bielik v3, a series of parameter-efficient generative text models (1.5B and 4.5B) optimized for Polish language processing. These models demonstrate that smaller, well-optimized architectures can achieve performance comparable to much larger counterparts while requiring substantially fewer computational resources. Our approach incorporates several key innovations: a custom Polish tokenizer (APT4) that significantly improves token efficiency, Weighted Instruction Cross-Entropy Loss to balance learning across instruction types, and Adaptive Learning Rate that dynamically adjusts based on training progress. Trained on a meticulously curated corpus of 292 billion tokens spanning 303 million documents, these models excel across multiple benchmarks, including the Open PL LLM Leaderboard, Complex Polish Text Understanding Benchmark, Polish EQ-Bench, and Polish Medical Leaderboard. The 4.5B parameter model achieves results competitive with models 2-3 times its size, while the 1.5B model delivers strong performance despite its extremely compact profile. These advances establish new benchmarks for parameter-efficient language modeling in less-represented languages, making high-quality Polish language AI more accessible for resource-constrained applications.
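The abstract names a Weighted Instruction Cross-Entropy Loss without spelling out its formula. A plausible formulation, assuming per-instruction-type weights $w_c$ applied to a standard token-level cross-entropy objective (the weight notation and normalization below are assumptions for illustration, not taken from the report), is:

$$
\mathcal{L}_{\text{WICE}} = -\frac{1}{\sum_{i=1}^{N} w_{c(i)}} \sum_{i=1}^{N} w_{c(i)} \, \log p_\theta\!\left(y_i \mid x_i\right)
$$

where $c(i)$ denotes the instruction type of training example $i$ and $w_{c(i)}$ its weight; setting all weights to 1 recovers ordinary cross-entropy, while larger weights up-weight under-represented instruction types.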
Krzysztof Ociepa, Łukasz Flis, Remigiusz Kinas, Krzysztof Wróbel, Adrian Gwoździej
Krzysztof Ociepa, Łukasz Flis, Remigiusz Kinas, Krzysztof Wróbel, Adrian Gwoździej. Bielik v3 Small: Technical Report [EB/OL]. (2025-05-05) [2025-06-06]. https://arxiv.org/abs/2505.02550.