国家预印本平台 (National Preprint Platform)
EAM: Enhancing Anything with Diffusion Transformers for Blind Super-Resolution


Source: arXiv
English Abstract

Utilizing pre-trained Text-to-Image (T2I) diffusion models to guide Blind Super-Resolution (BSR) has become a predominant approach in the field. While T2I models have traditionally relied on U-Net architectures, recent advancements have demonstrated that Diffusion Transformers (DiT) achieve significantly higher performance in this domain. In this work, we introduce the Enhancing Anything Model (EAM), a novel BSR method that leverages DiT and outperforms previous U-Net-based approaches. At its core is a novel block, $\Psi$-DiT, which effectively guides the DiT to enhance image restoration. This block employs a low-resolution latent as a separable flow injection control, forming a triple-flow architecture that effectively leverages the prior knowledge embedded in the pre-trained DiT. To fully exploit the prior guidance capabilities of T2I models and enhance their generalization in BSR, we introduce a progressive Masked Image Modeling strategy, which also reduces training costs. Additionally, we propose a subject-aware prompt generation strategy that employs a robust multi-modal model in an in-context learning framework. This strategy automatically identifies key image areas, provides detailed descriptions, and optimizes the utilization of T2I diffusion priors. Our experiments demonstrate that EAM achieves state-of-the-art results across multiple datasets, outperforming existing methods in both quantitative metrics and visual quality.
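The abstract does not disclose the internals of the $\Psi$-DiT block, so the triple-flow idea can only be illustrated schematically. The toy sketch below assumes three flows: a frozen "prior" flow standing in for the pre-trained DiT, a separable control flow fed by the low-resolution latent, and an additive fusion of the two. The two-layer MLP, the weight shapes, the function names, and the additive injection are all assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

def mlp(x, w1, b1, w2, b2):
    # Two-layer MLP used as a cheap stand-in for a transformer block.
    h = np.maximum(x @ w1 + b1, 0.0)  # ReLU hidden layer
    return h @ w2 + b2

rng = np.random.default_rng(0)
d = 8  # toy latent channel dimension

# Frozen weights standing in for the pre-trained DiT prior flow.
W = [rng.standard_normal((d, d)) * 0.1 for _ in range(2)]
b = [np.zeros(d) for _ in range(2)]

def psi_dit_block(z_t, z_lr, scale=0.5):
    """Hypothetical triple-flow block: the noisy latent z_t passes through
    the frozen prior flow; the low-resolution latent z_lr goes through a
    separable control flow; the third flow fuses them by scaled addition."""
    main = mlp(z_t, W[0], b[0], W[1], b[1])       # frozen prior flow
    ctrl = mlp(z_lr, W[0].T, b[0], W[1].T, b[1])  # separable control flow
    return main + scale * ctrl                     # injection / fusion flow
```

With an all-zero low-resolution latent the control branch contributes nothing, so the block reduces to the frozen prior flow alone; this is the separability property the sketch is meant to illustrate.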

Jie Hu, Kunpeng Du, Qiangyu Yan, Sen Lu, Jianhong Han, Hanting Chen, Hailin Hu, Haizhen Xie

Subject: Computing Technology; Computer Technology

Jie Hu, Kunpeng Du, Qiangyu Yan, Sen Lu, Jianhong Han, Hanting Chen, Hailin Hu, Haizhen Xie. EAM: Enhancing Anything with Diffusion Transformers for Blind Super-Resolution [EB/OL]. (2025-07-05) [2025-07-16]. https://arxiv.org/abs/2505.05209.
