ToReMi: Topic-Aware Data Reweighting for Dynamic Pre-Training Data Selection
Pre-training large language models (LLMs) requires enormous, diverse textual corpora, making effective data selection a key challenge for balancing computational resources and model performance. Current methodologies primarily emphasize data quality metrics and mixing proportions, yet they fail to adequately capture the underlying semantic connections between training samples and the quality disparities within individual domains. We introduce ToReMi (Topic-based Reweighting for Model improvement), a novel two-stage framework that dynamically adjusts training sample weights according to their topical associations and observed learning patterns. Our comprehensive experiments show that ToReMi variants consistently outperform conventional pre-training approaches, achieving faster perplexity reduction across multiple domains and stronger performance on downstream evaluation tasks. Code is available at https://github.com/zxx000728/ToReMi.
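The abstract describes the two-stage framework only at a high level, but the core idea, adjusting per-topic sampling weights from observed learning signals, can be illustrated with a minimal sketch. The function name `reweight_topics`, the `temperature` and `mix` parameters, the example topic labels, and the exp-of-loss heuristic below are all assumptions for illustration, not the paper's actual update rule.

```python
import math

def reweight_topics(topic_losses, base_weights, temperature=1.0, mix=0.5):
    """Hypothetical topic-level reweighting step: topics with higher current
    loss (i.e., not yet well learned) receive larger sampling weights.
    Illustrative only; not necessarily ToReMi's exact update rule."""
    # Exponentiate per-topic loss so harder topics dominate the distribution.
    raw = {t: math.exp(loss / temperature) for t, loss in topic_losses.items()}
    total = sum(raw.values())
    soft = {t: v / total for t, v in raw.items()}
    # Blend with the prior topic mixture to avoid abrupt distribution shifts.
    blended = {t: mix * soft[t] + (1 - mix) * base_weights[t] for t in soft}
    norm = sum(blended.values())
    return {t: w / norm for t, w in blended.items()}

# Usage: after each evaluation interval, feed per-topic losses and sample
# the next training batches according to the returned weights.
weights = reweight_topics(
    topic_losses={"science": 2.9, "code": 2.1, "news": 1.8},
    base_weights={"science": 0.4, "code": 0.3, "news": 0.3},
)
print(weights)  # e.g. {'science': 0.48, 'code': 0.28, 'news': 0.24}
```

Blending the loss-driven distribution with the prior mixture (rather than replacing it outright) keeps the training distribution from collapsing onto a few high-loss topics between reweighting steps.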
Xiaoxuan Zhu, Zhouhong Gu, Baiqian Wu, Suhang Zheng, Tao Wang, Tianyu Li, Hongwei Feng, Yanghua Xiao
Subject: Computing Technology, Computer Technology
Xiaoxuan Zhu, Zhouhong Gu, Baiqian Wu, Suhang Zheng, Tao Wang, Tianyu Li, Hongwei Feng, Yanghua Xiao. ToReMi: Topic-Aware Data Reweighting for Dynamic Pre-Training Data Selection [EB/OL]. (2025-04-01) [2025-04-26]. https://arxiv.org/abs/2504.00695.