
Training-Free Motion Customization for Distilled Video Generators with Adaptive Test-Time Distillation


Source: arXiv

Abstract

Distilled video generation models offer fast and efficient synthesis but struggle with motion customization when guided by reference videos, especially under training-free settings. Existing training-free methods, originally designed for standard diffusion models, fail to generalize due to the accelerated generative process and large denoising steps in distilled models. To address this, we propose MotionEcho, a novel training-free test-time distillation framework that enables motion customization by leveraging diffusion teacher forcing. Our approach uses high-quality, slow teacher models to guide the inference of fast student models through endpoint prediction and interpolation. To maintain efficiency, we dynamically allocate computation across timesteps according to guidance needs. Extensive experiments across various distilled video generation models and benchmark datasets demonstrate that our method significantly improves motion fidelity and generation quality while preserving high efficiency. Project page: https://euminds.github.io/motionecho/
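The abstract outlines the core mechanism at a high level: at selected timesteps, a slow teacher diffusion model conditioned on the reference motion predicts the clean-video endpoint, this prediction is interpolated with the distilled student's endpoint prediction, and teacher compute is allocated only where guidance is needed. The sketch below illustrates that idea in PyTorch-style Python; all names (`student`, `teacher`, `motion_ref`, `guide_weight`, `guide_cutoff`) and the linear step schedule are illustrative assumptions based on the abstract, not the authors' actual implementation.

```python
import torch

@torch.no_grad()
def motion_guided_step(student, teacher, x_t, t, t_next, motion_ref,
                       guide_weight=0.5, guide_cutoff=0.4):
    """One large denoising step of a distilled student with optional
    teacher guidance (hypothetical sketch based on the abstract)."""
    # Distilled student predicts the clean-video endpoint x0 in a single call.
    x0 = student(x_t, t)

    # Spend teacher compute only on timesteps where motion guidance matters
    # (here, a simple cutoff on the noisier part of the trajectory).
    if t > guide_cutoff:
        # Slow, high-quality teacher predicts x0 conditioned on the
        # reference motion ("diffusion teacher forcing" in the abstract).
        x0_teacher = teacher(x_t, t, motion_ref)
        # Interpolate the student and teacher endpoint predictions.
        x0 = (1.0 - guide_weight) * x0 + guide_weight * x0_teacher

    # Jump to the next (large) timestep by interpolating the current state
    # toward the guided endpoint -- a toy linear schedule for illustration.
    ratio = t_next / max(t, 1e-8)
    return ratio * x_t + (1.0 - ratio) * x0
```

In this reading, efficiency comes from the conditional teacher call: steps that do not need motion guidance fall back to the plain distilled update, while guided steps pay the extra teacher cost.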

Jintao Rong, Xin Xie, Xinyi Yu, Linlin Ou, Xinyu Zhang, Chunhua Shen, Dong Gong

Subjects: Computing Technology; Computer Technology

Jintao Rong, Xin Xie, Xinyi Yu, Linlin Ou, Xinyu Zhang, Chunhua Shen, Dong Gong. Training-Free Motion Customization for Distilled Video Generators with Adaptive Test-Time Distillation [EB/OL]. (2025-06-24) [2025-07-16]. https://arxiv.org/abs/2506.19348.
