Domain-Adaptive Continued Pre-Training of Small Language Models

Source: arXiv
Abstract

Continued pre-training of small language models offers a promising path for domain adaptation with limited computational resources. I investigate this approach within educational domains, evaluating it as a resource-efficient alternative to training models from scratch. Using a 125M parameter model, I demonstrate significant performance improvements through incremental training on 400 million tokens, followed by further training to reach 1 billion tokens. My approach includes comprehensive data preprocessing, memory-optimized training configurations, and benchmark-based evaluation. Results show notable gains in knowledge-intensive tasks (MMLU +8.1%) and contextual understanding (HellaSwag +7.6%), while revealing educational domain specialization trade-offs. I analyze token efficiency, catastrophic forgetting mitigation strategies, and scaling patterns. My findings suggest that thoughtful preprocessing and training methodologies enable meaningful improvements in language model capabilities even with constrained computational resources, opening pathways for domain-specific adaptation of smaller language models.
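
The abstract describes a pipeline of data preprocessing, memory-optimized continued pre-training, and benchmark evaluation. As a rough illustration of what such a setup can look like, the sketch below continues pre-training a ~125M-parameter causal language model on a domain text corpus using the Hugging Face Transformers Trainer with common memory-saving options (gradient accumulation, gradient checkpointing, mixed precision). The base checkpoint (facebook/opt-125m), the corpus file name (edu_corpus.txt), and all hyperparameters are assumptions for illustration, not the paper's actual configuration.

```python
# Illustrative sketch of domain-adaptive continued pre-training on a ~125M-parameter
# causal LM with Hugging Face Transformers. The base checkpoint, corpus file, and
# every hyperparameter here are assumptions for demonstration, not the paper's setup.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "facebook/opt-125m"  # assumed 125M-parameter base checkpoint
BLOCK_SIZE = 1024                 # assumed sequence length for packing

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Hypothetical preprocessed domain corpus: plain text, one document per line.
raw = load_dataset("text", data_files={"train": "edu_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"])

def group_texts(batch):
    # Concatenate all token ids and repack them into fixed-length blocks so
    # no training tokens are wasted on padding.
    concat = sum(batch["input_ids"], [])
    total = (len(concat) // BLOCK_SIZE) * BLOCK_SIZE
    return {"input_ids": [concat[i:i + BLOCK_SIZE] for i in range(0, total, BLOCK_SIZE)]}

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])
packed = tokenized.map(
    group_texts, batched=True, remove_columns=tokenized["train"].column_names
)

args = TrainingArguments(
    output_dir="cpt-125m-edu",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,   # larger effective batch on limited memory
    gradient_checkpointing=True,     # trade recompute for activation memory
    fp16=True,                       # mixed precision (assumes a CUDA GPU)
    learning_rate=2e-5,              # low LR helps limit catastrophic forgetting
    warmup_ratio=0.01,
    num_train_epochs=1,
    logging_steps=100,
    save_steps=2000,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=packed["train"],
    # For causal LM (mlm=False) the collator derives labels from input_ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Under these assumptions, extending training from an initial token budget to a larger one (e.g., the abstract's 400M then 1B tokens) would amount to resuming with `trainer.train(resume_from_checkpoint=True)` on the enlarged corpus; the benchmark evaluation (MMLU, HellaSwag) mentioned in the abstract is not shown here.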

Salman Faroz

Computing Technology, Computer Technology

Salman Faroz. Domain-Adaptive Continued Pre-Training of Small Language Models [EB/OL]. (2025-04-13) [2025-06-24]. https://arxiv.org/abs/2504.09687.
