Leveraging Consistent Spatio-Temporal Correspondence for Robust Visual Odometry
Recent approaches to visual odometry (VO) have significantly improved performance by using deep networks to predict optical flow between video frames. However, existing methods still suffer from noisy and inconsistent flow matching, making it difficult to handle challenging scenarios and long-sequence estimation. To overcome these challenges, we introduce Spatio-Temporal Visual Odometry (STVO), a novel deep network architecture that effectively leverages inherent spatio-temporal cues to enhance the accuracy and consistency of multi-frame flow matching. With more accurate and consistent flow matching, STVO achieves better pose estimation through bundle adjustment (BA). Specifically, STVO introduces two innovative components: 1) the Temporal Propagation Module, which utilizes multi-frame information to extract and propagate temporal cues across adjacent frames, maintaining temporal consistency; 2) the Spatial Activation Module, which utilizes geometric priors from depth maps to enhance spatial consistency while filtering out excessive noise and incorrect matches. Our STVO achieves state-of-the-art performance on the TUM-RGBD, EuRoC MAV, ETH3D and KITTI Odometry benchmarks. Notably, it improves accuracy by 77.8% on the ETH3D benchmark and 38.9% on the KITTI Odometry benchmark over the previous best methods.
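To make the two consistency cues concrete, the sketch below is a hypothetical, simplified illustration (not the authors' code): a forward-backward check standing in for the temporal cue propagated by the Temporal Propagation Module, and a depth-based reprojection check standing in for the geometric prior used by the Spatial Activation Module. All function names, thresholds, and the assumed relative-pose guess `T_rel` are illustrative assumptions; correspondences surviving both checks would then be weighted in the BA solve.

```python
# Hypothetical sketch, assuming dense flow fields, a depth map, intrinsics K,
# and a relative pose guess T_rel (4x4). Not the authors' implementation.
import numpy as np

def temporal_consistency_mask(flow_fwd, flow_bwd, thresh=1.0):
    """Keep pixels whose forward flow is undone by the backward flow
    (a forward-backward check as a stand-in for the temporal cue)."""
    H, W, _ = flow_fwd.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Follow the forward flow, then sample the backward flow there.
    x2 = np.clip(np.round(xs + flow_fwd[..., 0]).astype(int), 0, W - 1)
    y2 = np.clip(np.round(ys + flow_fwd[..., 1]).astype(int), 0, H - 1)
    round_trip = flow_fwd + flow_bwd[y2, x2]
    return np.linalg.norm(round_trip, axis=-1) < thresh

def spatial_consistency_mask(depth, flow, K, T_rel, thresh=2.0):
    """Keep pixels whose flow agrees with the reprojection induced by the
    depth map and a pose guess (a stand-in for the geometric prior)."""
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(float)
    # Back-project with the depth map, transform, and reproject.
    pts = depth[..., None] * (pix @ np.linalg.inv(K).T)
    pts = pts @ T_rel[:3, :3].T + T_rel[:3, 3]
    proj = pts @ K.T
    proj = proj[..., :2] / np.clip(proj[..., 2:3], 1e-6, None)
    geo_flow = proj - np.stack([xs, ys], axis=-1)
    return np.linalg.norm(geo_flow - flow, axis=-1) < thresh

# Matches passing both masks feed the bundle adjustment as reliable
# multi-frame correspondences; the rest are down-weighted or discarded.
```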
Junda Cheng, Zhaoxing Zhang, Can Zhang, Gangwei Xu, Xin Yang, Xiaoxiang Wang
Subject areas: aerospace technology; automation technology and equipment; computing and computer technology
Junda Cheng, Zhaoxing Zhang, Can Zhang, Gangwei Xu, Xin Yang, Xiaoxiang Wang. Leveraging Consistent Spatio-Temporal Correspondence for Robust Visual Odometry [EB/OL]. (2024-12-22) [2025-05-06]. https://arxiv.org/abs/2412.16923