StateSpaceDiffuser: Bringing Long Context to Diffusion World Models
World models have recently become promising tools for predicting realistic visuals based on actions in complex environments. However, their reliance on a short sequence of observations causes them to quickly lose track of context. As a result, visual consistency breaks down after just a few steps, and generated scenes no longer reflect information seen earlier. This limitation of state-of-the-art diffusion-based world models stems from their lack of a lasting environment state. To address this problem, we introduce StateSpaceDiffuser, which enables a diffusion model to handle long-context tasks by integrating a sequence representation from a state-space model (Mamba) that encodes the entire interaction history. This design restores long-term memory without sacrificing the high-fidelity synthesis of diffusion models. To rigorously measure temporal consistency, we develop an evaluation protocol that probes a model's ability to reinstantiate previously seen content in extended rollouts. Comprehensive experiments show that StateSpaceDiffuser significantly outperforms a strong diffusion-only baseline, maintaining a coherent visual context for an order of magnitude more steps. It delivers consistent views in both a 2D maze navigation task and a complex 3D environment. These results establish that bringing state-space representations into diffusion models is highly effective at preserving both visual detail and long-term memory.
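The abstract describes conditioning a diffusion world model on a state-space (Mamba) summary of the full interaction history. The sketch below illustrates that general idea only; it is not the paper's implementation. The DiagonalSSM stand-in for Mamba, the toy denoiser, all module names, dimensions, and the MSE noise-prediction objective are illustrative assumptions.

```python
# Minimal, illustrative sketch: a diffusion denoiser conditioned on a state-space
# summary of the whole interaction history. The Mamba block is replaced by a simple
# diagonal linear SSM; shapes and objective are assumptions for illustration only.
import torch
import torch.nn as nn


class DiagonalSSM(nn.Module):
    """Simplified stand-in for Mamba: h_t = a * h_{t-1} + B x_t, output C h_T."""

    def __init__(self, d_in: int, d_state: int):
        super().__init__()
        self.log_a = nn.Parameter(torch.zeros(d_state))   # per-channel decay rates
        self.B = nn.Linear(d_in, d_state, bias=False)
        self.C = nn.Linear(d_state, d_state, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_in) -> compact summary of the entire history
        a = torch.sigmoid(self.log_a)                      # keep the recurrence stable
        h = torch.zeros(x.size(0), a.numel(), device=x.device)
        for t in range(x.size(1)):                         # sequential scan over history
            h = a * h + self.B(x[:, t])
        return self.C(h)                                   # (batch, d_state)


class ConditionedDenoiser(nn.Module):
    """Toy denoiser: predicts noise from a noisy frame, timestep, and history state."""

    def __init__(self, frame_dim: int, d_state: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim + d_state + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, frame_dim),
        )

    def forward(self, noisy_frame, t, history_state):
        t_emb = t.float().unsqueeze(-1) / 1000.0           # crude timestep embedding
        return self.net(torch.cat([noisy_frame, history_state, t_emb], dim=-1))


# One illustrative training step on random data (all shapes are assumptions).
frames = torch.randn(4, 64, 32)                            # (batch, history length, obs+action dim)
target = torch.randn(4, 32)                                # next frame to predict
ssm, denoiser = DiagonalSSM(32, 128), ConditionedDenoiser(32, 128)

state = ssm(frames)                                        # long-context summary
t = torch.randint(0, 1000, (4,))
noise = torch.randn_like(target)
noisy = target + noise                                     # placeholder forward process
loss = nn.functional.mse_loss(denoiser(noisy, t, state), noise)
loss.backward()
```

The design point this sketch tries to convey is that the denoiser sees a fixed-size recurrent state rather than the raw frame window, so the conditioning cost per generation step stays constant no matter how long the interaction history grows.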
Nedko Savov, Naser Kazemi, Deheng Zhang, Danda Pani Paudel, Xi Wang, Luc Van Gool
Computing Technology; Computer Technology
Nedko Savov, Naser Kazemi, Deheng Zhang, Danda Pani Paudel, Xi Wang, Luc Van Gool. StateSpaceDiffuser: Bringing Long Context to Diffusion World Models [EB/OL]. (2025-05-28) [2025-06-23]. https://arxiv.org/abs/2505.22246.