
Rewriting Pre-Training Data Boosts LLM Performance in Math and Code

Source: arXiv

Abstract

The performance of large language models (LLMs) in program synthesis and mathematical reasoning is fundamentally limited by the quality of their pre-training corpora. We introduce two openly licensed datasets, released under the Llama 3.3 Community License, that significantly enhance LLM performance by systematically rewriting public data. SwallowCode (approximately 16.1 billion tokens) refines Python snippets from The-Stack-v2 through a novel four-stage pipeline: syntax validation, pylint-based style filtering, and a two-stage LLM rewriting process that enforces style conformity and transforms snippets into self-contained, algorithmically efficient examples. Unlike prior methods that rely on exclusionary filtering or limited transformations, our transform-and-retain approach upgrades low-quality code, maximizing data utility. SwallowMath (approximately 2.3 billion tokens) enhances Finemath-4+ by removing boilerplate, restoring context, and reformatting solutions into concise, step-by-step explanations. Within a fixed 50 billion token training budget, continual pre-training of Llama-3.1-8B with SwallowCode boosts pass@1 by +17.0 on HumanEval and +17.7 on HumanEval+ compared to Stack-Edu, surpassing the baseline model's code generation capabilities. Similarly, substituting SwallowMath yields +12.4 accuracy on GSM8K and +7.6 on MATH. Ablation studies confirm that each pipeline stage contributes incrementally, with rewriting delivering the largest gains. All datasets, prompts, and checkpoints are publicly available, enabling reproducible research and advancing LLM pre-training for specialized domains.
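To make the code pipeline described above concrete, the following is a minimal sketch of its first two stages (syntax validation and pylint-based style filtering) and of how surviving but low-scoring snippets could be routed to the LLM rewriting stages instead of being discarded, in line with the transform-and-retain approach. The helper names, the 6.0 score threshold, and the use of pylint's Python API (assumed pylint >= 2.12) are illustrative assumptions, not the paper's released implementation; only the stage order comes from the abstract.

import ast
import io
import os
import tempfile

from pylint.lint import Run
from pylint.reporters.text import TextReporter


def passes_syntax_check(snippet: str) -> bool:
    """Stage 1: keep only snippets that parse as valid Python."""
    try:
        ast.parse(snippet)
        return True
    except SyntaxError:
        return False


def pylint_score(snippet: str) -> float:
    """Stage 2: compute pylint's 0-10 global score for a snippet.
    Assumes pylint >= 2.12, where linter.stats is a LinterStats object."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
        tmp.write(snippet)
        path = tmp.name
    try:
        buffer = io.StringIO()
        run = Run([path], reporter=TextReporter(buffer), exit=False)
        return run.linter.stats.global_note
    finally:
        os.unlink(path)


def route_snippets(snippets, min_score: float = 6.0):
    """Discard unparseable code, keep clean code, and flag low-scoring code
    as a candidate for the two-stage LLM rewriting (hypothetical threshold)."""
    for code in snippets:
        if not passes_syntax_check(code):
            continue  # fails stage 1: dropped
        label = "keep" if pylint_score(code) >= min_score else "rewrite"
        yield code, label

The key design point the abstract emphasizes is that low-quality but syntactically valid code is routed to rewriting rather than filtered out, which is what maximizes data utility relative to purely exclusionary pipelines.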

Kazuki Fujii, Yukito Tajima, Sakae Mizuki, Hinari Shimada, Taihei Shiotani, Koshiro Saito, Masanari Ohi, Masaki Kawamura, Taishi Nakamura, Takumi Okamoto, Shigeki Ishida, Kakeru Hattori, Youmi Ma, Hiroya Takamura, Rio Yokota, Naoaki Okazaki

Subject: Computing Technology, Computer Technology

Kazuki Fujii, Yukito Tajima, Sakae Mizuki, Hinari Shimada, Taihei Shiotani, Koshiro Saito, Masanari Ohi, Masaki Kawamura, Taishi Nakamura, Takumi Okamoto, Shigeki Ishida, Kakeru Hattori, Youmi Ma, Hiroya Takamura, Rio Yokota, Naoaki Okazaki. Rewriting Pre-Training Data Boosts LLM Performance in Math and Code [EB/OL]. (2025-05-05) [2025-05-18]. https://arxiv.org/abs/2505.02881.
