
MixCache: Mixture-of-Cache for Video Diffusion Transformer Acceleration


Source: arXiv
Abstract

Leveraging the Transformer architecture and the diffusion process, video DiT models have emerged as a dominant approach for high-quality video generation. However, their multi-step iterative denoising process incurs high computational cost and inference latency. Caching, a widely adopted optimization in DiT models, exploits the redundancy of the diffusion process to skip computation at different granularities (e.g., step, cfg, block). Nevertheless, existing caching methods are limited to single-granularity strategies and struggle to flexibly balance generation quality and inference speed. In this work, we propose MixCache, a training-free caching-based framework for efficient video DiT inference. It first characterizes the interference and boundaries between different caching strategies, then introduces a context-aware cache triggering strategy to determine when caching should be enabled, along with an adaptive hybrid cache decision strategy that dynamically selects the optimal caching granularity. Extensive experiments on diverse models demonstrate that MixCache significantly accelerates video generation (e.g., 1.94$\times$ speedup on Wan 14B, 1.97$\times$ speedup on HunyuanVideo) while delivering both superior generation quality and inference efficiency compared to baseline methods.
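
The abstract does not describe the algorithm itself, but the general idea of multi-granularity caching in a diffusion denoising loop can be sketched as follows. The similarity metric, thresholds, and function names (`predict_noise`, `run_block`) are illustrative assumptions rather than MixCache's actual triggering or decision logic, and cfg-level caching is omitted for brevity:

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def run_block(block_id, h):
    # Stand-in for a real DiT transformer block.
    rng = np.random.default_rng(block_id)
    w = rng.standard_normal((h.shape[-1], h.shape[-1])) * 0.01
    return h + h @ w

def predict_noise(x, cache, step_thresh=0.9999, block_thresh=0.999,
                  num_blocks=3):
    """One model call with step-level and block-level cache reuse (toy sketch)."""
    # Coarse (step-level) decision: if the latent barely moved since the
    # last full computation, reuse the entire cached prediction.
    if (cache.get("step_input") is not None
            and cosine_similarity(x, cache["step_input"]) >= step_thresh):
        return cache["step_output"]

    cache["step_input"] = x
    h = x
    for b in range(num_blocks):
        # Fine (block-level) decision: reuse a single block's cached output
        # when its input is close to the input seen at the last computation.
        prev_in = cache.get(("block_in", b))
        if prev_in is not None and cosine_similarity(h, prev_in) >= block_thresh:
            h = cache[("block_out", b)]
        else:
            cache[("block_in", b)] = h
            h = run_block(b, h)
            cache[("block_out", b)] = h
    cache["step_output"] = h
    return h

def denoise(x, num_steps=8):
    cache = {}
    for _ in range(num_steps):
        eps = predict_noise(x, cache)
        x = x - 0.1 * eps  # toy scheduler update, not a real sampler
    return x

if __name__ == "__main__":
    latent = np.random.default_rng(0).standard_normal((16, 64))
    print(denoise(latent).shape)
```

In this sketch, the coarse check stands in for a cache triggering decision (whether to cache at all in the current step) and the per-block check for a granularity decision; the paper's context-aware and adaptive strategies presumably replace these fixed thresholds.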

Yuanxin Wei, Lansong Diao, Bujiao Chen, Shenggan Cheng, Zhengping Qian, Wenyuan Yu, Nong Xiao, Wei Lin, Jiangsu Du

Computing Technology, Computer Technology

Yuanxin Wei, Lansong Diao, Bujiao Chen, Shenggan Cheng, Zhengping Qian, Wenyuan Yu, Nong Xiao, Wei Lin, Jiangsu Du. MixCache: Mixture-of-Cache for Video Diffusion Transformer Acceleration [EB/OL]. (2025-08-18) [2025-09-04]. https://arxiv.org/abs/2508.12691.
