
Distribution-aware Dataset Distillation for Efficient Image Restoration

Source: arXiv
English Abstract

With the exponential increase in image data, training an image restoration model is laborious. Dataset distillation is a potential solution to this problem, yet current distillation techniques remain largely unexplored in the field of image restoration. To fill this gap, we propose the Distribution-aware Dataset Distillation method (TripleD), a new framework that extends the principles of dataset distillation to image restoration. Specifically, TripleD uses a pre-trained vision Transformer to extract image features for complexity evaluation, and a subset (with far fewer samples than the original training set) is selected based on this complexity. The selected subset is then fed through a lightweight CNN that adjusts its image distribution to align with that of the original dataset at the feature level. To condense knowledge efficiently, training is divided into two stages: early stages focus on simpler, low-complexity samples to build foundational knowledge, while later stages select more complex and uncertain samples as the model matures. Our method achieves promising performance on multiple image restoration tasks, including multi-task image restoration, all-in-one image restoration, and ultra-high-definition image restoration. Notably, we can train a state-of-the-art image restoration model on an ultra-high-definition (4K resolution) dataset using only one consumer-grade GPU in less than 8 hours, a roughly 500× saving in computing resources and an immense saving in training time.
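To make the pipeline described above more concrete, below is a minimal, hedged sketch of its three ingredients: scoring samples by complexity with features from a pre-trained vision Transformer, selecting a small subset with an early/late curriculum, and aligning the subset's distribution to the full set at the feature level via a lightweight CNN. The function names, the entropy-based complexity score, and the moment-matching loss are illustrative assumptions, not the paper's exact design.

```python
# Illustrative sketch of the TripleD idea (assumed details, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def complexity_scores(feats: torch.Tensor) -> torch.Tensor:
    """Score each sample by the entropy of its softmaxed ViT features.

    feats: (N, D) pooled features from a pre-trained vision Transformer.
    Higher entropy is used here as a proxy for higher sample complexity.
    """
    p = F.softmax(feats, dim=-1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=-1)


def select_subset(feats: torch.Tensor, ratio: float, stage: str = "early") -> torch.Tensor:
    """Pick indices of a small subset based on complexity.

    Early stages prefer low-complexity samples; later stages prefer
    high-complexity ones, mirroring the two-stage curriculum in the abstract.
    """
    k = max(1, int(ratio * feats.shape[0]))
    scores = complexity_scores(feats)
    order = torch.argsort(scores, descending=(stage != "early"))
    return order[:k]


class DistributionAligner(nn.Module):
    """Lightweight CNN that nudges selected images toward the full-set statistics."""

    def __init__(self, channels: int = 3, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual refinement keeps image content while adjusting its statistics.
        return x + self.net(x)


def moment_matching_loss(subset_feats: torch.Tensor, full_feats: torch.Tensor) -> torch.Tensor:
    """Match first and second moments of the two feature distributions."""
    mean_gap = (subset_feats.mean(0) - full_feats.mean(0)).pow(2).sum()
    var_gap = (subset_feats.var(0) - full_feats.var(0)).pow(2).sum()
    return mean_gap + var_gap


if __name__ == "__main__":
    # Toy run with random tensors standing in for ViT features and images.
    feats = torch.randn(1000, 128)             # pooled ViT features of the full set
    idx = select_subset(feats, ratio=0.05)     # keep 5% of the samples
    images = torch.randn(len(idx), 3, 64, 64)  # the selected (distilled) images
    aligner = DistributionAligner()
    refined = aligner(images)
    # In practice, features would be re-extracted from the refined images
    # before computing the alignment loss.
    loss = moment_matching_loss(feats[idx], feats)
    print(idx.shape, refined.shape, float(loss))
```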

Xiuyi Jia, Zhuoran Zheng, Xin Su, Chen Wu

Subject: Computing Technology, Computer Technology

Xiuyi Jia, Zhuoran Zheng, Xin Su, Chen Wu. Distribution-aware Dataset Distillation for Efficient Image Restoration [EB/OL]. (2025-04-20) [2025-05-02]. https://arxiv.org/abs/2504.14826.
