
Effective Data Pruning through Score Extrapolation

Source: arXiv

Abstract

Training advanced machine learning models demands massive datasets, resulting in prohibitive computational costs. To address this challenge, data pruning techniques identify and remove redundant training samples while preserving model performance. Yet, existing pruning techniques predominantly require a full initial training pass to identify removable samples, negating any efficiency benefits for single training runs. To overcome this limitation, we introduce a novel importance score extrapolation framework that requires training on only a small subset of data. We present two initial approaches in this framework - k-nearest neighbors and graph neural networks - to accurately predict sample importance for the entire dataset using patterns learned from this minimal subset. We demonstrate the effectiveness of our approach for 2 state-of-the-art pruning methods (Dynamic Uncertainty and TDDS), 4 different datasets (CIFAR-10, CIFAR-100, Places-365, and ImageNet), and 3 training paradigms (supervised, unsupervised, and adversarial). Our results indicate that score extrapolation is a promising direction to scale expensive score calculation methods, such as pruning, data attribution, or other tasks.
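As a concrete illustration of the extrapolation idea, the sketch below fits a k-nearest-neighbor regressor on importance scores computed for a small scored subset and then predicts scores for the entire dataset. This is a minimal sketch under stated assumptions, not the paper's implementation: the embeddings, placeholder scores, subset fraction, and hyperparameters (n_neighbors, keep_frac) are all illustrative. In the paper, the subset scores would come from an expensive method such as Dynamic Uncertainty or TDDS after training on the subset.

# Hypothetical sketch of kNN-based score extrapolation (not the paper's code).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Toy stand-ins: feature embeddings for the full dataset
# (e.g., from a pretrained encoder); real features are an assumption here.
n_total, dim = 10_000, 128
embeddings = rng.normal(size=(n_total, dim)).astype(np.float32)

# Step 1: score a small subset. In practice these importance scores would be
# computed by training on the subset (e.g., Dynamic Uncertainty or TDDS);
# random values stand in for them here.
subset_frac = 0.1
subset_idx = rng.choice(n_total, size=int(subset_frac * n_total), replace=False)
subset_scores = rng.random(len(subset_idx))  # placeholder importance scores

# Step 2: fit kNN in embedding space on the scored subset and extrapolate
# scores to every sample in the dataset.
knn = KNeighborsRegressor(n_neighbors=10, weights="distance")
knn.fit(embeddings[subset_idx], subset_scores)
all_scores = knn.predict(embeddings)

# Step 3: prune, keeping only the highest-scoring fraction for the real run.
keep_frac = 0.7
keep_idx = np.argsort(all_scores)[-int(keep_frac * n_total):]
print(f"Kept {len(keep_idx)} of {n_total} samples for the actual training run.")

The graph-neural-network variant mentioned in the abstract would replace the kNN regressor with a GNN over a similarity graph of the embeddings; the subset-then-extrapolate structure stays the same.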

Sebastian Schmidt, Prasanga Dhungel, Christoffer Löffler, Björn Nieth, Stephan Günnemann, Leo Schwinn

Subjects: Computing technology; computer technology

Sebastian Schmidt, Prasanga Dhungel, Christoffer Löffler, Björn Nieth, Stephan Günnemann, Leo Schwinn. Effective Data Pruning through Score Extrapolation [EB/OL]. (2025-06-18) [2025-06-27]. https://arxiv.org/abs/2506.09010.
