Task-Specific Generative Dataset Distillation with Difficulty-Guided Sampling
To alleviate the reliance of deep neural networks on large-scale datasets, dataset distillation aims to generate compact, high-quality synthetic datasets that achieve performance comparable to the original dataset. The integration of generative models has significantly advanced this field. However, existing approaches primarily focus on aligning the distilled dataset with the original one, often overlooking task-specific information that can be critical for optimal downstream performance. In this paper, focusing on the downstream task of classification, we propose a task-specific sampling strategy for generative dataset distillation that incorporates the concept of difficulty to better account for the requirements of the target task. The final dataset is sampled from a larger image pool using a sampling distribution obtained by matching the difficulty distribution of the original dataset. A logarithmic transformation is applied as a pre-processing step to correct for distributional bias. The results of extensive experiments demonstrate the effectiveness of our method and suggest its potential for enhancing performance on other downstream tasks.
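The abstract outlines the core procedure: score each image by difficulty, log-transform the scores, and sample from the generated pool so that the selection's difficulty distribution matches that of the original dataset. The following is a minimal sketch of that idea, not the paper's exact procedure; it assumes a per-image difficulty score is already available (e.g., a pretrained classifier's loss), and the function name, bin count, and use of `log1p` and histogram matching are illustrative assumptions.

```python
import numpy as np

def difficulty_guided_sample(pool_difficulty, orig_difficulty, n_select,
                             n_bins=20, rng=None):
    """Sample indices from a synthetic image pool so that the selected
    subset's (log-transformed) difficulty distribution matches that of
    the original dataset. All inputs are 1-D arrays of difficulty scores.
    This is an illustrative sketch, not the authors' implementation.
    """
    rng = rng or np.random.default_rng(0)

    # Logarithmic transform as a pre-processing step to reduce the skew
    # of raw difficulty scores (the abstract's distributional-bias fix).
    log_pool = np.log1p(pool_difficulty)
    log_orig = np.log1p(orig_difficulty)

    # Estimate the original dataset's difficulty distribution via a histogram.
    bins = np.histogram_bin_edges(log_orig, bins=n_bins)
    orig_hist, _ = np.histogram(log_orig, bins=bins)
    target_prob = orig_hist / orig_hist.sum()

    # Assign each pool image to a difficulty bin, then weight it so the
    # expected bin frequencies of the selection match the target distribution.
    bin_idx = np.clip(np.digitize(log_pool, bins) - 1, 0, n_bins - 1)
    pool_hist = np.bincount(bin_idx, minlength=n_bins)
    weights = target_prob[bin_idx] / np.maximum(pool_hist[bin_idx], 1)
    weights = weights / weights.sum()

    # Draw the distilled set without replacement under the matched distribution.
    return rng.choice(len(pool_difficulty), size=n_select,
                      replace=False, p=weights)
```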
Mingzhuo Li, Guang Li, Jiafeng Mao, Linfeng Ye, Takahiro Ogawa, Miki Haseyama
Computing Technology; Computer Technology
Mingzhuo Li, Guang Li, Jiafeng Mao, Linfeng Ye, Takahiro Ogawa, Miki Haseyama. Task-Specific Generative Dataset Distillation with Difficulty-Guided Sampling [EB/OL]. (2025-07-04) [2025-07-16]. https://arxiv.org/abs/2507.03331