Attend to Not Attended: Structure-then-Detail Token Merging for Post-training DiT Acceleration
Diffusion transformers have shown exceptional performance in visual generation but incur high computational costs. Token reduction techniques, which compress models by sharing the denoising process among similar tokens, have been introduced. However, existing approaches neglect the denoising priors of the diffusion models, leading to suboptimal acceleration and diminished image quality. This study proposes a novel concept: attend to prune feature redundancies in areas not attended by the diffusion process. We analyze the location and degree of feature redundancies based on the structure-then-detail denoising priors. We then introduce SDTM, a structure-then-detail token merging approach that dynamically compresses feature redundancies. Specifically, we design dynamic visual token merging, compression ratio adjusting, and prompt reweighting for different stages. Applied post-training, the proposed method integrates seamlessly into any DiT architecture. Extensive experiments across various backbones, schedulers, and datasets showcase the superiority of our method, e.g., achieving 1.55x acceleration with negligible impact on image quality. Project page: https://github.com/ICTMCG/SDTM.
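The abstract describes token merging only at a high level. As a rough illustration of what one merging step can look like, the sketch below implements generic similarity-based bipartite token merging (in the spirit of ToMe), where the number of merged tokens could be scheduled per denoising stage. The function name `bipartite_token_merge`, the alternating src/dst split, and the averaging rule are illustrative assumptions, not the paper's actual SDTM algorithm.

```python
import torch
import torch.nn.functional as F


def bipartite_token_merge(x: torch.Tensor, r: int) -> torch.Tensor:
    """Merge r redundant tokens into their most similar counterparts.

    Hypothetical sketch, not the paper's SDTM method.
    x: (batch, num_tokens, dim) token features from one transformer block.
    r: number of tokens to remove by merging.
    Returns: (batch, num_tokens - r, dim).
    """
    b, n, d = x.shape
    r = min(r, n // 2)
    if r <= 0:
        return x

    # Split tokens into two alternating sets: src (candidates to merge away)
    # and dst (tokens that absorb them).
    feats = F.normalize(x, dim=-1)
    src_f, dst_f = feats[:, ::2], feats[:, 1::2]
    src_x, dst_x = x[:, ::2], x[:, 1::2]

    # For every src token, find its most similar dst token.
    sim = src_f @ dst_f.transpose(1, 2)            # (b, n_src, n_dst)
    best_sim, best_dst = sim.max(dim=-1)           # (b, n_src)

    # Merge away the r src tokens with the highest redundancy (strongest match).
    order = best_sim.argsort(dim=-1, descending=True)
    merged_idx, kept_idx = order[:, :r], order[:, r:]

    kept_src = src_x.gather(1, kept_idx.unsqueeze(-1).expand(-1, -1, d))
    merged_src = src_x.gather(1, merged_idx.unsqueeze(-1).expand(-1, -1, d))
    merged_dst = best_dst.gather(1, merged_idx)    # destination index per merged token

    # Average each merged src token into its destination token.
    dst_x = dst_x.clone()
    counts = torch.ones(b, dst_x.shape[1], 1, device=x.device, dtype=x.dtype)
    dst_x.scatter_add_(1, merged_dst.unsqueeze(-1).expand(-1, -1, d), merged_src)
    counts.scatter_add_(1, merged_dst.unsqueeze(-1),
                        torch.ones(b, r, 1, device=x.device, dtype=x.dtype))
    dst_x = dst_x / counts

    return torch.cat([kept_src, dst_x], dim=1)
```

In a stage-aware setting such as the one the abstract hints at, the merge count `r` would not be fixed but scheduled as a function of the denoising timestep; the paper's actual schedule and prompt reweighting are not reproduced here.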
Haipeng Fang, Sheng Tang, Juan Cao, Enshuo Zhang, Fan Tang, Tong-Yee Lee
Computing Technology, Computer Technology
Haipeng Fang, Sheng Tang, Juan Cao, Enshuo Zhang, Fan Tang, Tong-Yee Lee. Attend to Not Attended: Structure-then-Detail Token Merging for Post-training DiT Acceleration [EB/OL]. (2025-05-16) [2025-06-09]. https://arxiv.org/abs/2505.11707.