
Enhancing Long Video Generation Consistency without Tuning

Source: arXiv
Abstract

Despite the considerable progress achieved on the long video generation problem, there is still significant room to improve the consistency of the generated videos, particularly in terms of their smoothness and transitions between scenes. We address these issues to enhance the consistency and coherence of videos generated with either single or multiple prompts. We propose the Time-frequency based temporal Attention Reweighting Algorithm (TiARA), which judiciously edits the attention score matrix based on the Discrete Short-Time Fourier Transform. This method is supported by a frequency-based analysis, ensuring that the edited attention score matrix achieves improved consistency across frames, and it is the first-of-its-kind frequency-based method for video diffusion models. For videos generated by multiple prompts, we further uncover key factors, such as the alignment of the prompts, that affect the quality of prompt interpolation. Inspired by our analyses, we propose PromptBlend, an advanced prompt interpolation pipeline that systematically aligns the prompts. Extensive experimental results validate the efficacy of our proposed method, demonstrating consistent and substantial improvements over multiple baselines.
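
The abstract describes TiARA as editing temporal attention scores via the Discrete Short-Time Fourier Transform to suppress frame-to-frame inconsistency. Below is a minimal, hypothetical sketch of what such frequency-based attention reweighting could look like; the function name `reweight_temporal_attention` and the parameters `nperseg`, `cutoff`, and `damp` are illustrative assumptions for this sketch and do not reproduce the authors' actual algorithm, which is specified in the paper.

```python
# Hypothetical sketch: damp high-frequency components of temporal
# attention weights using the Short-Time Fourier Transform.
import numpy as np
from scipy.signal import stft, istft


def reweight_temporal_attention(attn, nperseg=8, cutoff=0.25, damp=0.5):
    """Smooth temporal attention scores in the frequency domain.

    attn: (num_queries, num_frames) post-softmax attention weights,
          where the key axis indexes frames (the temporal dimension).
    """
    num_queries, num_frames = attn.shape
    seg = min(nperseg, num_frames)
    smoothed = np.empty_like(attn)
    for q in range(num_queries):
        # STFT along the frame (time) axis of one attention row.
        freqs, _, Z = stft(attn[q], nperseg=seg)
        # Attenuate components above the normalized cutoff frequency,
        # which correspond to rapid frame-to-frame fluctuations.
        mask = np.where(freqs > cutoff, damp, 1.0)[:, None]
        _, rec = istft(Z * mask, nperseg=seg)
        smoothed[q] = rec[:num_frames]
    # Re-normalize so each row is again a valid attention distribution.
    smoothed = np.clip(smoothed, 1e-8, None)
    return smoothed / smoothed.sum(axis=1, keepdims=True)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(4, 16))
    attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    out = reweight_temporal_attention(attn)
    print(out.shape, out.sum(axis=1))  # rows still sum to 1
```

The sketch treats each query's attention over frames as a 1-D signal, attenuates its high-frequency STFT components, and re-normalizes the result, which is one plausible way to realize the frequency-based editing the abstract refers to.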

Zhuoran Yang, Xingyao Li, Fengzhuo Zhang, Jiachun Pan, Yunlong Hou, Vincent Y. F. Tan

Subject: Computing Technology, Computer Technology

Zhuoran Yang, Xingyao Li, Fengzhuo Zhang, Jiachun Pan, Yunlong Hou, Vincent Y. F. Tan. Enhancing Long Video Generation Consistency without Tuning [EB/OL]. (2025-07-07) [2025-07-16]. https://arxiv.org/abs/2412.17254.
