Packing Input Frame Context in Next-Frame Prediction Models for Video Generation

Source: arXiv

Abstract

We present a neural network structure, FramePack, to train next-frame (or next-frame-section) prediction models for video generation. FramePack compresses input frames so that the transformer context length stays fixed regardless of the video length. As a result, we are able to process a large number of frames with video diffusion at a computation bottleneck similar to that of image diffusion. This also allows significantly higher training batch sizes for video (comparable to those of image diffusion training). We also propose an anti-drifting sampling method that generates frames in inverted temporal order with early-established endpoints to avoid exposure bias (error accumulation over iterations). Finally, we show that existing video diffusion models can be finetuned with FramePack, and their visual quality may improve because next-frame prediction supports more balanced diffusion schedulers with less extreme flow-shift timesteps.
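
The fixed-context claim in the abstract can be made concrete with a small sketch. The snippet below is an illustration, not the paper's implementation: it assumes a geometric compression schedule in which each older frame contributes a constant factor fewer tokens than the one after it, so the packed context length converges to a fixed budget no matter how many frames are supplied. The function name, token count, and compression base are illustrative assumptions.

```python
# Minimal sketch of the fixed-context packing described in the abstract.
# Assumption: each frame farther from the prediction target is compressed
# by an extra factor of `compression_base`; the exact kernels and token
# counts in FramePack may differ -- these numbers are illustrative only.

def packed_context_length(num_frames: int,
                          tokens_per_frame: int = 1536,
                          compression_base: int = 2) -> int:
    """Total transformer context tokens after packing `num_frames` frames.

    Frame k (counted backwards from the most recent frame) contributes
    tokens_per_frame // compression_base**k tokens; distant frames shrink
    to zero and effectively drop out, so the sum is bounded by roughly
    tokens_per_frame * compression_base / (compression_base - 1).
    """
    return sum(tokens_per_frame // compression_base**k for k in range(num_frames))


if __name__ == "__main__":
    # The packed length saturates instead of growing with the video length.
    for n in (1, 4, 16, 64, 256):
        print(f"{n:3d} input frames -> {packed_context_length(n)} context tokens")
```

Because the total is a geometric series, doubling the number of input frames leaves the packed context length essentially unchanged, which is what allows video diffusion training to use batch sizes comparable to image diffusion, as the abstract notes.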

Lvmin Zhang, Maneesh Agrawala

Subject: Computing Technology, Computer Technology

Lvmin Zhang, Maneesh Agrawala. Packing Input Frame Context in Next-Frame Prediction Models for Video Generation [EB/OL]. (2025-04-17) [2025-04-26]. https://arxiv.org/abs/2504.12626.