
Context as Memory: Scene-Consistent Interactive Long Video Generation with Memory Retrieval


Source: arXiv

Abstract

Recent advances in interactive video generation have shown promising results, yet existing approaches struggle with scene-consistent memory capabilities in long video generation due to limited use of historical context. In this work, we propose Context-as-Memory, which utilizes historical context as memory for video generation. It includes two simple yet effective designs: (1) storing context in frame format without additional post-processing; (2) conditioning by concatenating context and frames to be predicted along the frame dimension at the input, requiring no external control modules. Furthermore, considering the enormous computational overhead of incorporating all historical context, we propose the Memory Retrieval module to select truly relevant context frames by determining FOV (Field of View) overlap between camera poses, which significantly reduces the number of candidate frames without substantial information loss. Experiments demonstrate that Context-as-Memory achieves superior memory capabilities in interactive long video generation compared to state-of-the-art methods, even generalizing effectively to open-domain scenarios not seen during training. Project page: https://context-as-memory.github.io/.
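The Memory Retrieval idea described above can be sketched in simplified form. The paper determines FOV overlap from full camera poses; the sketch below is a hypothetical simplification that assumes cameras share a position and differ only in yaw heading, so overlap reduces to an angular comparison. The function names (`fov_overlap`, `retrieve_context`) and the parameters (`hfov`, `k`) are illustrative assumptions, not the authors' API.

```python
import math


def fov_overlap(yaw_a: float, yaw_b: float, hfov: float = 90.0) -> float:
    """Angular overlap (in degrees) between the horizontal FOVs of two
    cameras, given their yaw headings in degrees.

    Simplifying assumption: both cameras sit at the same position, so
    overlap depends only on the difference in heading."""
    # Smallest signed angular difference between the two headings.
    diff = abs((yaw_a - yaw_b + 180.0) % 360.0 - 180.0)
    # Two hfov-wide sectors overlap by (hfov - diff), clamped at zero.
    return max(0.0, hfov - diff)


def retrieve_context(query_yaw: float, history, k: int = 4,
                     hfov: float = 90.0):
    """Select up to k history frames whose FOV overlaps the query camera
    most, discarding frames with no overlap at all.

    `history` is a list of (frame_id, yaw) pairs; the return value is a
    list of frame ids, ordered by decreasing overlap."""
    scored = [(fov_overlap(query_yaw, yaw, hfov), fid)
              for fid, yaw in history]
    scored.sort(reverse=True)
    return [fid for score, fid in scored[:k] if score > 0.0]
```

With a 90° horizontal FOV, a query camera at yaw 10° would rank a frame captured at yaw 0° above one at 45°, and drop a frame at 180° entirely, since their frusta never intersect. This is the pruning step: only the retrieved frames are concatenated with the frames to be predicted, so the conditioning cost stays bounded as the video grows.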

Jiwen Yu, Jianhong Bai, Yiran Qin, Quande Liu, Xintao Wang, Pengfei Wan, Di Zhang, Xihui Liu

Subject: Computing Technology, Computer Science and Technology

Jiwen Yu, Jianhong Bai, Yiran Qin, Quande Liu, Xintao Wang, Pengfei Wan, Di Zhang, Xihui Liu. Context as Memory: Scene-Consistent Interactive Long Video Generation with Memory Retrieval [EB/OL]. (2025-06-03) [2025-07-16]. https://arxiv.org/abs/2506.03141.
