
4DGT: Learning a 4D Gaussian Transformer Using Real-World Monocular Videos


Source: arXiv
Abstract

We propose 4DGT, a 4D Gaussian-based Transformer model for dynamic scene reconstruction, trained entirely on real-world monocular posed videos. Using 4D Gaussians as an inductive bias, 4DGT unifies static and dynamic components, enabling the modeling of complex, time-varying environments with varying object lifespans. We propose a novel density-control strategy in training, which enables 4DGT to handle longer space-time inputs while remaining efficient to render at runtime. Our model processes 64 consecutive posed frames in a rolling-window fashion, predicting consistent 4D Gaussians in the scene. Unlike optimization-based methods, 4DGT performs purely feed-forward inference, reducing reconstruction time from hours to seconds and scaling effectively to long video sequences. Trained only on large-scale monocular posed video datasets, 4DGT significantly outperforms prior Gaussian-based networks on real-world videos and achieves accuracy on par with optimization-based methods on cross-domain videos. Project page: https://4dgt.github.io
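The rolling-window processing described above can be sketched as follows. This is a minimal illustration only, assuming a 64-frame window and a hypothetical overlap stride; the paper's actual windowing, stride, and prediction model are not specified in the abstract, so `rolling_windows` and its parameters are placeholders, not the authors' implementation.

```python
def rolling_windows(num_frames, window=64, stride=32):
    """Yield (start, end) frame-index pairs that cover a video sequence
    in overlapping fixed-size windows, clamping the final window so it
    ends exactly at the last frame.

    window and stride values are illustrative assumptions, not taken
    from the 4DGT paper.
    """
    if num_frames <= window:
        # Short sequence: a single window covers everything.
        yield 0, num_frames
        return
    start = 0
    while start + window < num_frames:
        yield start, start + window
        start += stride
    # Final window, shifted back so it still spans `window` frames.
    yield num_frames - window, num_frames

# Example: a 160-frame video processed as overlapping 64-frame windows,
# each of which would be fed to the model to predict 4D Gaussians.
windows = list(rolling_windows(160))
```

Each window is processed by a single feed-forward pass, which is why reconstruction scales to long sequences without per-scene optimization.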

Zhen Xu, Zhengqin Li, Zhao Dong, Xiaowei Zhou, Richard Newcombe, Zhaoyang Lv

Subjects: Information Science and Information Technology; Computing and Computer Technology; Systems Science and Systems Technology

Zhen Xu, Zhengqin Li, Zhao Dong, Xiaowei Zhou, Richard Newcombe, Zhaoyang Lv. 4DGT: Learning a 4D Gaussian Transformer Using Real-World Monocular Videos [EB/OL]. (2025-06-09) [2025-06-23]. https://arxiv.org/abs/2506.08015.
