国家预印本平台 (National Preprint Platform)

Taming Consistency Distillation for Accelerated Human Image Animation

Source: arXiv
English Abstract

Recent advancements in human image animation have been propelled by video diffusion models, yet their reliance on numerous iterative denoising steps results in high inference costs and slow speeds. An intuitive solution is to adopt consistency models, which serve as an effective acceleration paradigm through consistency distillation. However, simply applying this strategy to human image animation often leads to quality decline, including visual blurring, motion degradation, and facial distortion, particularly in dynamic regions. In this paper, we propose the DanceLCM approach, complemented by several enhancements to improve visual quality and motion continuity in the low-step regime: (1) segmented consistency distillation with an auxiliary lightweight head to incorporate supervision from real video latents, mitigating cumulative errors from single full-trajectory generation; and (2) a motion-focused loss that concentrates on motion regions, together with explicit injection of facial fidelity features to improve face authenticity. Extensive qualitative and quantitative experiments demonstrate that DanceLCM achieves results comparable to state-of-the-art video diffusion models with a mere 2-4 inference steps, significantly reducing the inference burden without compromising video quality. The code and models will be made publicly available.
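The abstract mentions a motion-focused loss that concentrates supervision on dynamic regions. The paper's exact formulation is not given here, so the following is a minimal illustrative sketch, assuming the loss is a teacher-student MSE reweighted by frame-to-frame latent change; all names and the weighting scheme are assumptions for illustration:

```python
import numpy as np

def motion_focused_loss(student_latent, teacher_latent, prev_frame_latent, eps=1e-6):
    """Hypothetical sketch of a motion-focused distillation loss.

    Upweights the student-teacher MSE in regions where the teacher latent
    changes a lot between consecutive frames (a proxy for motion). This is
    an illustrative assumption, not the paper's exact formulation.
    """
    # Motion magnitude: absolute change between consecutive teacher latents.
    motion = np.abs(teacher_latent - prev_frame_latent)
    # Weight map >= 1 that emphasises dynamic regions relative to the mean.
    weights = 1.0 + motion / (motion.mean() + eps)
    # Weighted mean-squared error between student and teacher predictions.
    return float(np.mean(weights * (student_latent - teacher_latent) ** 2))
```

With this weighting, a prediction error in a fast-moving region contributes more to the loss than the same error in a static background region, which matches the abstract's stated goal of reducing motion degradation at low step counts.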

Xiang Wang, Shiwei Zhang, Hangjie Yuan, Yujie Wei, Yingya Zhang, Changxin Gao, Yuehuan Wang, Nong Sang

Subject: Computing Technology; Computer Technology

Xiang Wang, Shiwei Zhang, Hangjie Yuan, Yujie Wei, Yingya Zhang, Changxin Gao, Yuehuan Wang, Nong Sang. Taming Consistency Distillation for Accelerated Human Image Animation [EB/OL]. (2025-04-15) [2025-05-25]. https://arxiv.org/abs/2504.11143
