DDiT: Dynamic Resource Allocation for Diffusion Transformer Model Serving
Text-to-Video (T2V) models aim to generate dynamic and expressive videos from textual prompts. The generation pipeline typically involves multiple modules, such as a language encoder, a Diffusion Transformer (DiT), and a Variational Autoencoder (VAE). Existing serving systems often rely on monolithic model deployment, overlooking the distinct characteristics of each module and leading to inefficient GPU utilization. In addition, DiT exhibits varying performance gains across different resolutions and degrees of parallelism, and significant optimization potential remains unexplored. To address these problems, we present DDiT, a flexible system that integrates both inter-phase and intra-phase optimizations. DDiT focuses on two key metrics: the optimal degree of parallelism, which prevents excessive parallelism at specific resolutions, and starvation time, which quantifies how long each request is sacrificed while waiting for resources. To this end, DDiT introduces a decoupled control mechanism to minimize the computational inefficiency caused by imbalances in the degree of parallelism between the DiT and VAE phases. It also designs a greedy resource allocation algorithm with a novel scheduling mechanism that operates at single-step granularity, enabling dynamic and timely resource scaling. Our evaluation on the T5 encoder, OpenSora STDiT, and OpenSora VAE models across diverse datasets reveals that DDiT significantly outperforms state-of-the-art baselines by up to 1.44x in p99 latency and 1.43x in average latency.
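The abstract's step-granularity greedy allocation can be illustrated with a minimal sketch. This is not the paper's implementation; the `OPTIMAL_DOP` table, the `Request` fields, and the starvation-first ordering are illustrative assumptions standing in for DDiT's profiled per-resolution parallelism limits and its starvation-time metric.

```python
# Hypothetical sketch of greedy GPU allocation at a single diffusion-step
# boundary. All names and numbers are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class Request:
    rid: int
    resolution: str          # e.g. "720p"
    steps_left: int          # remaining diffusion steps
    starvation: float = 0.0  # accumulated time spent waiting without GPUs

# Assumed per-resolution cap on useful parallelism (the "optimal degree
# of parallelism"); beyond this, extra GPUs yield little speedup.
OPTIMAL_DOP = {"480p": 2, "720p": 4, "1080p": 8}

def allocate(requests, free_gpus):
    """Greedily assign GPUs at a step boundary.

    Serve the most-starved requests first, and never grant a request
    more GPUs than its resolution can efficiently exploit.
    """
    plan = {}
    for req in sorted(requests, key=lambda r: -r.starvation):
        if free_gpus == 0:
            break
        dop = min(OPTIMAL_DOP[req.resolution], free_gpus)
        plan[req.rid] = dop
        free_gpus -= dop
    return plan
```

Because the plan is recomputed after every denoising step rather than per request, GPUs freed by a finishing DiT phase can be rebalanced immediately instead of idling until the whole request completes.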
Heyang Huang, Cunchen Hu, Jiaqi Zhu, Ziyuan Gao, Liangliang Xu, Yizhou Shan, Yungang Bao, Sun Ninghui, Tianwei Zhang, Sa Wang
Computing Technology; Computer Technology
Heyang Huang, Cunchen Hu, Jiaqi Zhu, Ziyuan Gao, Liangliang Xu, Yizhou Shan, Yungang Bao, Sun Ninghui, Tianwei Zhang, Sa Wang. DDiT: Dynamic Resource Allocation for Diffusion Transformer Model Serving [EB/OL]. (2025-06-16) [2025-07-02]. https://arxiv.org/abs/2506.13497.