Sortblock: Similarity-Aware Feature Reuse for Diffusion Model
Diffusion Transformers (DiTs) have demonstrated remarkable generative capabilities, particularly benefiting from Transformer architectures that enhance visual and artistic fidelity. However, their inherently sequential denoising process results in high inference latency, limiting their deployment in real-time scenarios. Existing training-free acceleration approaches typically reuse intermediate features at fixed timesteps or layers, overlooking the evolving semantic focus across denoising stages and Transformer blocks. To address this, we propose Sortblock, a training-free inference acceleration framework that dynamically caches block-wise features based on their similarity across adjacent timesteps. By ranking blocks according to the evolution of their residuals, Sortblock adaptively determines a recomputation ratio, selectively skipping redundant computations while preserving generation quality. Furthermore, we incorporate a lightweight linear prediction mechanism to reduce the accumulated error in skipped blocks. Extensive experiments across various tasks and DiT architectures demonstrate that Sortblock achieves over 2$\times$ inference speedup with minimal degradation in output quality, offering an effective and generalizable solution for accelerating diffusion-based generative models.
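To make the mechanism concrete, the following is a minimal PyTorch sketch of the idea described in the abstract. It is an illustration under stated assumptions, not the authors' implementation: the class name SortblockCache, the use of cosine similarity as the change score, the fixed recompute_ratio, and the first-order extrapolation rule are all hypothetical stand-ins for the ranking, adaptive recomputation, and linear prediction components the abstract names.

    import torch
    import torch.nn.functional as F

    class SortblockCache:
        """Illustrative sketch of similarity-aware block caching (hypothetical,
        not the authors' code). For each Transformer block we cache the residual
        (block output minus input) from the previous two timesteps; blocks whose
        residuals changed least are skipped and extrapolated instead."""

        def __init__(self, num_blocks, recompute_ratio=0.5):
            self.num_blocks = num_blocks
            self.recompute_ratio = recompute_ratio  # fraction of blocks recomputed per step
            self.prev = [None] * num_blocks         # residual cached at timestep t-1
            self.prev2 = [None] * num_blocks        # residual cached at timestep t-2

        def _select_blocks(self):
            # Rank blocks by how much their residual evolved between the last two
            # timesteps; the fastest-changing blocks are recomputed, the rest skipped.
            scores = []
            for i in range(self.num_blocks):
                if self.prev[i] is None or self.prev2[i] is None:
                    scores.append((float("inf"), i))  # no history yet: force recompute
                else:
                    sim = F.cosine_similarity(self.prev[i].flatten(),
                                              self.prev2[i].flatten(), dim=0)
                    scores.append((1.0 - sim.item(), i))  # higher = changed more
            scores.sort(reverse=True)
            k = max(1, int(self.recompute_ratio * self.num_blocks))
            return {i for _, i in scores[:k]}

        @torch.no_grad()
        def step(self, x, blocks):
            # One denoising step over all Transformer blocks; each block is assumed
            # to include its own skip connection, so its residual is block(x) - x.
            recompute = self._select_blocks()
            for i, block in enumerate(blocks):
                if i in recompute or self.prev[i] is None or self.prev2[i] is None:
                    residual = block(x) - x  # full computation
                else:
                    # Lightweight linear prediction: extrapolate the cached trend
                    # r_t ~ r_{t-1} + (r_{t-1} - r_{t-2}) instead of recomputing.
                    residual = 2 * self.prev[i] - self.prev2[i]
                self.prev2[i], self.prev[i] = self.prev[i], residual
                x = x + residual
            return x

In a real DiT, blocks would be the model's Transformer block list, and the recomputation ratio would itself be determined adaptively from the ranked residual evolution rather than fixed, as the abstract describes.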
Hanqi Chen, Xu Zhang, Xiaoliu Guan, Lielin Jiang, Guanzhong Wang, Zeyu Chen, Yi Liu
Computing Technology, Computer Technology
Hanqi Chen, Xu Zhang, Xiaoliu Guan, Lielin Jiang, Guanzhong Wang, Zeyu Chen, Yi Liu. Sortblock: Similarity-Aware Feature Reuse for Diffusion Model [EB/OL]. (2025-08-01) [2025-08-11]. https://arxiv.org/abs/2508.00412.