AdaDeDup: Adaptive Hybrid Data Pruning for Efficient Large-Scale Object Detection Training
The computational burden and inherent redundancy of large-scale datasets challenge the training of contemporary machine learning models. Data pruning offers a solution by selecting smaller, informative subsets, yet existing methods struggle: density-based approaches can be task-agnostic, while model-based techniques may introduce redundancy or prove computationally prohibitive. We introduce Adaptive De-Duplication (AdaDeDup), a novel hybrid framework that synergistically integrates density-based pruning with model-informed feedback in a cluster-adaptive manner. AdaDeDup first partitions data and applies an initial density-based pruning. It then employs a proxy model to evaluate the impact of this initial pruning within each cluster by comparing losses on kept versus pruned samples. This task-aware signal adaptively adjusts cluster-specific pruning thresholds, enabling more aggressive pruning in redundant clusters while preserving critical data in informative ones. Extensive experiments on large-scale object detection benchmarks (Waymo, COCO, nuScenes) using standard models (BEVFormer, Faster R-CNN) demonstrate AdaDeDup's advantages. It significantly outperforms prominent baselines, substantially reduces performance degradation (e.g., over 54% versus random sampling on Waymo), and achieves near-original model performance while pruning 20% of data, highlighting its efficacy in enhancing data efficiency for large-scale model training. Code is open-sourced.
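The abstract outlines the core loop: partition the data into clusters, apply an initial density-based pruning in each cluster, then compare proxy-model losses on kept versus pruned samples to adapt each cluster's pruning threshold. The sketch below illustrates that loop under stated assumptions; the function and parameter names (adadedup_sketch, proxy_loss, base_keep_ratio, sensitivity) and the specific threshold-adjustment rule are illustrative placeholders, not the authors' implementation.

# Minimal sketch of the cluster-adaptive pruning loop described in the abstract.
# proxy_loss, base_keep_ratio, sensitivity, and the adjustment rule are
# illustrative assumptions, not the authors' implementation.
import numpy as np
from sklearn.cluster import KMeans


def adadedup_sketch(features, proxy_loss, n_clusters=10,
                    base_keep_ratio=0.8, sensitivity=0.5):
    """Return global indices of samples to keep.

    features:   (N, D) array of per-sample embeddings.
    proxy_loss: callable mapping an index array to per-sample losses
                from a small proxy model (assumed cheap or precomputed).
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    keep = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        center = features[idx].mean(axis=0)

        # Initial density-based pruning: rank samples by distance to the
        # cluster center and drop the closest (most redundant) ones first.
        dist = np.linalg.norm(features[idx] - center, axis=1)
        order = idx[np.argsort(-dist)]              # farthest first
        n_keep = max(1, int(base_keep_ratio * len(idx)))
        kept, pruned = order[:n_keep], order[n_keep:]

        # Model-informed feedback: if pruned samples are no harder than kept
        # ones (small or negative loss gap), the cluster is redundant and can
        # be pruned more aggressively; a large positive gap means the pruned
        # samples carry information, so the keep ratio is raised instead.
        if len(pruned) > 0:
            gap = proxy_loss(pruned).mean() - proxy_loss(kept).mean()
            ratio = np.clip(base_keep_ratio + sensitivity * np.tanh(gap),
                            0.1, 1.0)
            n_keep = max(1, int(ratio * len(idx)))
            kept = order[:n_keep]
        keep.append(kept)
    return np.concatenate(keep)

In this sketch, a positive loss gap (pruned samples harder than kept ones) raises the cluster's keep ratio, while a near-zero or negative gap allows more aggressive pruning, mirroring the adaptive, task-aware behavior the abstract describes.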
Feiyang Kang, Nadine Chang, Maying Shen, Marc T. Law, Rafid Mahmood, Ruoxi Jia, Jose M. Alvarez
Computing Technology, Computer Technology
Feiyang Kang, Nadine Chang, Maying Shen, Marc T. Law, Rafid Mahmood, Ruoxi Jia, Jose M. Alvarez. AdaDeDup: Adaptive Hybrid Data Pruning for Efficient Large-Scale Object Detection Training[EB/OL]. (2025-06-24)[2025-07-16]. https://arxiv.org/abs/2507.00049.