
AutoMixer: Checkpoint Artifacts as Automatic Data Mixers

Source: arXiv
Abstract

In language model training, it is desirable to equip models with capabilities from various tasks. However, it is not clear how to directly obtain the right data mixtures for these capabilities, as the relationship between data and tasks is difficult to model. In this work, we observe that checkpoint models exhibit emerging capabilities at different points in the training trajectory. The training process often saves checkpoints as artifacts that are under-utilized as a source of in-training data signals. We identify these artifact models based on their respective capabilities on the benchmarks and leverage them as data mixers by using their aggregated first-order influence approximation over source data. We demonstrate on eight reasoning benchmarks that the proposed framework yields significant improvements in the pretraining setting, with performance gains of up to 1.93%. Overall, this shows the potential of checkpoint models to enhance data quality and optimize data mixtures.
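As a rough illustration of the first-order influence idea described above (this is a sketch, not the authors' implementation; the names flat_grad, influence_scores, the loss_fn signature, and the softmax weighting at the end are assumptions for this example), each source domain can be scored by the dot product between its loss gradient and a benchmark loss gradient, averaged over the saved checkpoint artifacts:

import torch

def flat_grad(loss, model):
    # Flatten the gradient of `loss` w.r.t. all trainable parameters into one vector.
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def influence_scores(checkpoints, loss_fn, benchmark_batch, source_batches):
    # First-order influence: Influence(z) ~= grad L_bench(theta) . grad L(z; theta).
    # A positive score suggests training on z should reduce the benchmark loss.
    # Scores are aggregated (averaged) across the checkpoint artifacts.
    scores = torch.zeros(len(source_batches))
    for model in checkpoints:
        g_bench = flat_grad(loss_fn(model, benchmark_batch), model)
        for i, batch in enumerate(source_batches):
            g_src = flat_grad(loss_fn(model, batch), model)
            scores[i] += torch.dot(g_bench, g_src).item()
    return scores / len(checkpoints)

# Toy usage with a linear model and random data (illustrative only).
model = torch.nn.Linear(4, 1)
mse = torch.nn.MSELoss()
loss_fn = lambda m, b: mse(m(b[0]), b[1])
bench = (torch.randn(8, 4), torch.randn(8, 1))
sources = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(3)]
weights = torch.softmax(influence_scores([model], loss_fn, bench, sources), dim=0)
print(weights)  # mixture weights over the three source domains

Under this reading, domains whose gradients align with the benchmark gradient receive larger mixture weights, and averaging over multiple checkpoints smooths out the noise of any single point in the training trajectory.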

Ernie Chang, Yang Li, Patrick Huber, David Kant, Yangyang Shi, Vikas Chandra

Computing technology; computer technology

Ernie Chang, Yang Li, Patrick Huber, David Kant, Yangyang Shi, Vikas Chandra. AutoMixer: Checkpoint Artifacts as Automatic Data Mixers [EB/OL]. (2025-06-27) [2025-07-20]. https://arxiv.org/abs/2506.21910.
