Exploring Diffusion with Test-Time Training on Efficient Image Restoration
Image restoration faces challenges including ineffective feature fusion, computational bottlenecks, and inefficient diffusion processes. To address these, we propose DiffRWKVIR, a novel framework unifying Test-Time Training (TTT) with efficient diffusion. Our approach introduces three key innovations: (1) Omni-Scale 2D State Evolution extends RWKV's location-dependent parameterization to hierarchical multi-directional 2D scanning, enabling global contextual awareness with linear complexity O(L); (2) Chunk-Optimized Flash Processing achieves a 3.2x speedup through intra-chunk parallelism and contiguous chunk processing (O(LCd) complexity), reducing sequential dependencies and computational overhead; (3) Prior-Guided Efficient Diffusion extracts a compact Image Prior Representation (IPR) in only 5-20 steps, achieving 45% faster training/inference than DiffIR while addressing the computational inefficiency of denoising. Evaluated across super-resolution and inpainting benchmarks (Set5, Set14, BSD100, Urban100, Places365), DiffRWKVIR outperforms SwinIR, HAT, and MambaIR/v2 in PSNR, SSIM, LPIPS, and efficiency metrics. Our method establishes a new paradigm for adaptive, high-efficiency image restoration with optimized hardware utilization.
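To make the chunked-processing claim concrete, the sketch below illustrates how a decayed linear recurrence (the basic form underlying RWKV-style state evolution) can be evaluated chunk by chunk: only the chunk boundary is sequential, while work inside each chunk is a vectorized prefix sum, giving O(LCd) total work. This is a minimal toy, not the paper's kernel; the function name `chunked_decayed_scan`, the constant per-channel decay (a stand-in for RWKV's location-dependent parameterization), and the chunk size are assumptions for illustration only.

```python
import torch

def chunked_decayed_scan(k, v, decay, chunk_size=32):
    """Compute out_t = decay * out_{t-1} + k_t * v_t over a length-L sequence,
    processing contiguous chunks so intra-chunk work is one vectorized
    cumulative sum and only the chunk boundary is handled sequentially.

    k, v  : (L, d) tensors
    decay : (d,) per-channel decay in (0, 1)  -- hypothetical constant decay
    """
    L, d = k.shape
    out = torch.empty_like(v)
    state = torch.zeros(d, dtype=v.dtype)

    for start in range(0, L, chunk_size):
        end = min(start + chunk_size, L)
        kv = k[start:end] * v[start:end]                   # (C, d) contributions
        C = end - start
        t = torch.arange(C, dtype=v.dtype).unsqueeze(1)    # (C, 1)
        pow_t = decay.unsqueeze(0) ** t                    # decay^t, (C, d)

        # Intra-chunk decayed prefix sum: sum_{j<=t} decay^(t-j) * kv_j
        # (rescaling trick; fine for modest chunk sizes, real kernels avoid
        #  the division for numerical stability)
        intra = pow_t * torch.cumsum(kv / pow_t, dim=0)

        # Carry-in from previous chunks contributes decay^(t+1) * state
        out[start:end] = intra + decay * pow_t * state
        state = out[end - 1]                               # state after this chunk

    return out


# Quick check against the naive sequential recurrence
if __name__ == "__main__":
    L, d = 128, 8
    k, v = torch.randn(L, d), torch.randn(L, d)
    decay = torch.rand(d) * 0.1 + 0.9                      # decays in (0.9, 1.0)

    ref, s = torch.empty(L, d), torch.zeros(d)
    for t in range(L):
        s = decay * s + k[t] * v[t]
        ref[t] = s

    print(torch.allclose(chunked_decayed_scan(k, v, decay), ref, atol=1e-4))
```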
Rongchang Lu, Tianduo Luo, Yunzhi Jiang, Conghan Yue, Pei Yang, Guibao Liu, Changyang Gu
Computing Technology, Computer Technology
Rongchang Lu, Tianduo Luo, Yunzhi Jiang, Conghan Yue, Pei Yang, Guibao Liu, Changyang Gu. Exploring Diffusion with Test-Time Training on Efficient Image Restoration [EB/OL]. (2025-06-22) [2025-07-16]. https://arxiv.org/abs/2506.14541.