One-Step Diffusion-Based Image Compression with Semantic Distillation
While recent diffusion-based generative image codecs have shown impressive performance, their iterative sampling process introduces considerable latency. In this work, we revisit the design of diffusion-based codecs and argue that multi-step sampling is not necessary for generative compression. Based on this insight, we propose OneDC, a One-step Diffusion-based generative image Codec, which integrates a latent compression module with a one-step diffusion generator. Recognizing the critical role of semantic guidance in one-step diffusion, we propose using the hyperprior as a semantic signal, overcoming the limitations of text prompts in representing complex visual content. To further enhance the semantic capability of the hyperprior, we introduce a semantic distillation mechanism that transfers knowledge from a pretrained generative tokenizer to the hyperprior codec. Additionally, we adopt a hybrid pixel- and latent-domain optimization to jointly enhance both reconstruction fidelity and perceptual realism. Extensive experiments demonstrate that OneDC achieves SOTA perceptual quality even with one-step generation, offering over 40% bitrate reduction and 20x faster decoding compared to prior multi-step diffusion-based codecs. Code will be released later.
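The abstract describes a pipeline in which a latent compression module produces a quantized latent, the hyperprior supplies a semantic conditioning signal, and a single diffusion step (rather than an iterative sampler) generates the reconstruction. The toy sketch below illustrates only this data flow; all functions are hypothetical stand-ins and do not reflect the actual OneDC architecture.

```python
import numpy as np

def encode(x):
    # Hypothetical "latent compression module": crude 4x downsampling
    # followed by quantization to an integer latent.
    y = x.reshape(-1, 4).mean(axis=1)
    return np.round(y)

def hyperprior_semantic_signal(y_hat):
    # Stand-in for the hyperprior used as a semantic signal
    # (in OneDC this replaces text-prompt guidance).
    return np.tanh(y_hat)

def one_step_generator(y_hat, z):
    # A single conditioned refinement step in place of an iterative
    # multi-step diffusion sampler; the 0.5 weight is illustrative.
    return y_hat + 0.5 * z

rng = np.random.default_rng(0)
x = rng.normal(size=64)                # toy "image" signal
y_hat = encode(x)                      # quantized latent to transmit
z = hyperprior_semantic_signal(y_hat)  # semantic conditioning
x_hat = one_step_generator(y_hat, z)   # one-step reconstruction
print(x_hat.shape)
```

The point of the one-step design is that decoding cost is a single generator pass, which is where the reported 20x decoding speedup over multi-step diffusion codecs comes from.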
Naifu Xue, Zhaoyang Jia, Jiahao Li, Bin Li, Yuan Zhang, Yan Lu
Computing Technology, Computer Technology
Naifu Xue, Zhaoyang Jia, Jiahao Li, Bin Li, Yuan Zhang, Yan Lu. One-Step Diffusion-Based Image Compression with Semantic Distillation [EB/OL]. (2025-05-22) [2025-06-29]. https://arxiv.org/abs/2505.16687.