DiST-4D: Disentangled Spatiotemporal Diffusion with Metric Depth for 4D Driving Scene Generation
Current generative models struggle to synthesize dynamic 4D driving scenes that simultaneously support temporal extrapolation and spatial novel view synthesis (NVS) without per-scene optimization. A key challenge lies in finding an efficient and generalizable geometric representation that seamlessly connects temporal and spatial synthesis. To address this, we propose DiST-4D, the first disentangled spatiotemporal diffusion framework for 4D driving scene generation, which leverages metric depth as the core geometric representation. DiST-4D decomposes the problem into two diffusion processes: DiST-T, which predicts future metric depth and multi-view RGB sequences directly from past observations, and DiST-S, which enables spatial NVS by training only on existing viewpoints while enforcing cycle consistency. This cycle consistency mechanism introduces a forward-backward rendering constraint, reducing the generalization gap between observed and unseen viewpoints. Metric depth is essential for both reliable future forecasting and accurate spatial NVS, as it provides a view-consistent geometric representation that generalizes well to unseen perspectives. Experiments demonstrate that DiST-4D achieves state-of-the-art performance in both temporal prediction and NVS tasks, while also delivering competitive performance in planning-related evaluations.
Jiazhe Guo, Yikang Ding, Xiwu Chen, Shuo Chen, Bohan Li, Yingshuang Zou, Xiaoyang Lyu, Feiyang Tan, Xiaojuan Qi, Zhiheng Li, Hao Zhao
Computing technology, computer science
Jiazhe Guo, Yikang Ding, Xiwu Chen, Shuo Chen, Bohan Li, Yingshuang Zou, Xiaoyang Lyu, Feiyang Tan, Xiaojuan Qi, Zhiheng Li, Hao Zhao. DiST-4D: Disentangled Spatiotemporal Diffusion with Metric Depth for 4D Driving Scene Generation [EB/OL]. (2025-03-19) [2025-05-07]. https://arxiv.org/abs/2503.15208