
Generative Dataset Distillation using Min-Max Diffusion Model

Source: arXiv
English Abstract

In this paper, we address the problem of generative dataset distillation, which uses generative models to synthesize images. The generator can produce an arbitrary number of images within a fixed evaluation-time budget. In this work, we leverage the popular diffusion model as the generator to compute a surrogate dataset, boosted by a min-max loss that controls the dataset's diversity and representativeness during training. However, the diffusion model is time-consuming when generating images, as it requires an iterative generation process. We observe a critical trade-off between the number of image samples and the image quality controlled by the number of diffusion steps, and propose Diffusion Step Reduction to achieve optimal performance. This paper details our comprehensive method and its performance. Our model achieved $2^{nd}$ place in the generative track of \href{https://www.dd-challenge.com/#/}{The First Dataset Distillation Challenge of ECCV2024}, demonstrating its superior performance.
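The min-max idea in the abstract, balancing representativeness (synthetic samples stay close to real data) against diversity (synthetic samples stay spread apart), can be illustrated with a toy feature-space sketch. This is an assumption-laden simplification, not the paper's actual loss; the function name `minmax_loss`, the nearest-neighbor representativeness term, and the weighting `lam` are all hypothetical:

```python
import numpy as np

def minmax_loss(syn_feats, real_feats, lam=0.5):
    """Toy surrogate for a min-max objective (hypothetical, not the
    paper's formulation): minimize distance of synthetic features to
    real features while maximizing spread among synthetic features."""
    # Representativeness: for each synthetic feature, distance to its
    # nearest real feature, averaged over the synthetic set (lower = better).
    d = np.linalg.norm(syn_feats[:, None, :] - real_feats[None, :, :], axis=-1)
    rep = d.min(axis=1).mean()
    # Diversity: mean pairwise distance among synthetic features
    # (higher = better, so it enters with a negative sign).
    n = len(syn_feats)
    pd = np.linalg.norm(syn_feats[:, None, :] - syn_feats[None, :, :], axis=-1)
    div = pd.sum() / (n * (n - 1)) if n > 1 else 0.0
    return rep - lam * div
```

During distillation, a generator would be trained to minimize such an objective over the features of its synthetic images, so that the surrogate dataset is both faithful to and diverse over the real data distribution.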

Junqiao Fan、Yunjiao Zhou、Min Chang Jordan Ren、Jianfei Yang

Computing Technology; Computer Technology

Junqiao Fan, Yunjiao Zhou, Min Chang Jordan Ren, Jianfei Yang. Generative Dataset Distillation using Min-Max Diffusion Model [EB/OL]. (2025-03-24) [2025-08-02]. https://arxiv.org/abs/2503.18626.