
Improving Model Classification by Optimizing the Training Dataset

Source: arXiv
Abstract

In the era of data-centric AI, the ability to curate high-quality training data is as crucial as model design. Coresets offer a principled approach to data reduction, enabling efficient learning on large datasets through importance sampling. However, conventional sensitivity-based coreset construction often falls short in optimizing for classification performance metrics, e.g., $F1$ score, focusing instead on loss approximation. In this work, we present a systematic framework for tuning the coreset generation process to enhance downstream classification quality. Our method introduces new tunable parameters beyond traditional sensitivity scores, including deterministic sampling, class-wise allocation, and refinement via active sampling. Through extensive experiments on diverse datasets and classifiers, we demonstrate that tuned coresets can significantly outperform both vanilla coresets and full-dataset training on key classification metrics, offering an effective path towards better and more efficient model training.
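The abstract does not spell out the construction, but the general idea of sensitivity-based importance sampling with a class-wise allocation knob can be sketched as follows. This is a minimal illustration under assumptions: the function name, the distance-to-centroid sensitivity proxy, and the equal-budget class split are all hypothetical choices, not the authors' actual method.

```python
import numpy as np

def sensitivity_coreset(X, y, size, class_balanced=False, rng=None):
    """Sample a weighted coreset via importance (sensitivity) sampling.

    Sensitivities are approximated here by distance to the per-class
    centroid (an illustrative proxy): points far from their class mean
    get higher sampling probability. Sampled points receive
    inverse-probability weights, so the weighted coreset loss is an
    unbiased estimate of the full-data loss.
    """
    rng = np.random.default_rng(rng)
    n = len(X)
    sens = np.empty(n)
    for c in np.unique(y):
        mask = y == c
        centroid = X[mask].mean(axis=0)
        dist = np.linalg.norm(X[mask] - centroid, axis=1)
        # small floor so every point has nonzero sampling probability
        sens[mask] = dist + 1.0 / mask.sum()
    p_all = sens / sens.sum()
    if class_balanced:
        # class-wise allocation: split the budget equally across classes
        classes = np.unique(y)
        per_class = size // len(classes)
        parts = []
        for c in classes:
            cls_idx = np.flatnonzero(y == c)
            p_cls = sens[cls_idx] / sens[cls_idx].sum()
            take = min(per_class, len(cls_idx))
            parts.append(rng.choice(cls_idx, size=take, replace=False, p=p_cls))
        idx = np.concatenate(parts)
    else:
        idx = rng.choice(n, size=size, replace=False, p=p_all)
    weights = 1.0 / (len(idx) * p_all[idx])  # inverse-probability weights
    return idx, weights
```

Setting `class_balanced=True` corresponds to the class-wise allocation parameter described above; the deterministic-sampling and active-refinement knobs would replace or post-process the random draw.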

Morad Tukan, Loay Mualem, Eitan Netzer, Liran Sigalat

Subject: computing technology; computer technology

Morad Tukan, Loay Mualem, Eitan Netzer, Liran Sigalat. Improving Model Classification by Optimizing the Training Dataset [EB/OL]. (2025-07-22) [2025-08-10]. https://arxiv.org/abs/2507.16729
