
Image-to-Image Translation with Diffusion Transformers and CLIP-Based Image Conditioning

Source: arXiv
Abstract

Image-to-image translation aims to learn a mapping between a source and a target domain, enabling tasks such as style transfer, appearance transformation, and domain adaptation. In this work, we explore a diffusion-based framework for image-to-image translation by adapting Diffusion Transformers (DiT), which combine the denoising capabilities of diffusion models with the global modeling power of transformers. To guide the translation process, we condition the model on image embeddings extracted from a pre-trained CLIP encoder, allowing for fine-grained and structurally consistent translations without relying on text or class labels. We incorporate both a CLIP similarity loss to enforce semantic consistency and an LPIPS perceptual loss to enhance visual fidelity during training. We validate our approach on two benchmark datasets: face2comics, which translates real human faces to comic-style illustrations, and edges2shoes, which translates edge maps to realistic shoe images. Experimental results demonstrate that DiT, combined with CLIP-based conditioning and perceptual similarity objectives, achieves high-quality, semantically faithful translations, offering a promising alternative to GAN-based models for paired image-to-image translation tasks.
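The abstract describes a training objective that combines a CLIP similarity loss (semantic consistency) with an LPIPS perceptual loss (visual fidelity), while CLIP image embeddings of the source image condition the DiT denoiser. Below is a minimal sketch of how such a combined loss could be computed in PyTorch with the OpenAI `clip` and `lpips` packages; the loss weights, the choice of comparing the translated output against the ground-truth target image, and the preprocessing details are illustrative assumptions, not the paper's actual implementation. How the CLIP embedding is injected into the DiT (e.g., adaptive layer norm vs. cross-attention) is not specified in the abstract and is omitted here.

```python
import torch
import torch.nn.functional as F
import clip    # OpenAI CLIP (pip install git+https://github.com/openai/CLIP.git)
import lpips   # pip install lpips

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen pre-trained CLIP image encoder: its embeddings serve both as the
# conditioning signal for the DiT denoiser and as the basis of the
# semantic-consistency loss. LPIPS (VGG backbone) measures perceptual fidelity.
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float().eval()
for p in clip_model.parameters():
    p.requires_grad_(False)
lpips_fn = lpips.LPIPS(net="vgg").to(device)

_CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073]).view(1, 3, 1, 1)
_CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711]).view(1, 3, 1, 1)


def clip_embed(images: torch.Tensor) -> torch.Tensor:
    """L2-normalized CLIP embeddings of images given in [-1, 1]."""
    x = (images + 1.0) / 2.0                                  # to [0, 1]
    x = (x - _CLIP_MEAN.to(images)) / _CLIP_STD.to(images)    # CLIP normalization
    x = F.interpolate(x, size=224, mode="bilinear", align_corners=False)
    return F.normalize(clip_model.encode_image(x), dim=-1)


def translation_losses(pred: torch.Tensor, target: torch.Tensor,
                       lambda_clip: float = 1.0,
                       lambda_lpips: float = 1.0) -> torch.Tensor:
    """CLIP similarity loss + LPIPS perceptual loss between the translated
    image `pred` and the ground-truth target-domain image `target`
    (both in [-1, 1], shape (N, 3, H, W)); the weights are hypothetical."""
    # Semantic consistency: 1 - cosine similarity of CLIP image embeddings.
    clip_loss = 1.0 - (clip_embed(pred) * clip_embed(target)).sum(dim=-1).mean()
    # Visual fidelity: LPIPS expects inputs in [-1, 1].
    lpips_loss = lpips_fn(pred, target).mean()
    return lambda_clip * clip_loss + lambda_lpips * lpips_loss
```

In practice this term would be added to the diffusion denoising objective; the relative weighting between the two auxiliary losses and the base objective is not given in the abstract.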

Qiang Zhu, Kuan Lu, Menghao Huo, Yuxiao Li

Computing Technology; Computer Technology

Qiang Zhu, Kuan Lu, Menghao Huo, Yuxiao Li. Image-to-Image Translation with Diffusion Transformers and CLIP-Based Image Conditioning [EB/OL]. (2025-05-21) [2025-06-18]. https://arxiv.org/abs/2505.16001.