Crowdsourced 3D Mapping: A Combined Multi-View Geometry and Self-Supervised Learning Approach
The ability to efficiently utilize crowdsourced visual data carries immense potential for the domains of large-scale dynamic mapping and autonomous driving. However, state-of-the-art methods for crowdsourced 3D mapping assume prior knowledge of camera intrinsics. In this work, we propose a framework that estimates the 3D positions of semantically meaningful landmarks such as traffic signs without assuming known camera intrinsics, using only a monocular color camera and GPS. We utilize multi-view geometry as well as deep learning-based self-calibration, depth, and ego-motion estimation for traffic sign positioning, and show that combining their strengths is important for increasing the map coverage. To facilitate research on this task, we construct and make available a KITTI-based 3D traffic sign ground truth positioning dataset. Using our proposed framework, we achieve average single-journey relative and absolute positioning accuracies of 39 cm and 1.26 m, respectively, on this dataset.
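The paper itself provides no code here; as an illustrative aside, the multi-view geometry component of such a pipeline reduces to triangulating a landmark (e.g., a detected traffic sign center) from its pixel observations across frames, given estimated camera intrinsics and poses. The sketch below is a minimal direct linear transform (DLT) triangulation in Python with NumPy; the function name, the intrinsics matrix `K`, and the camera poses are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def triangulate_dlt(observations, projections):
    """Triangulate one 3D landmark from >= 2 pixel observations.

    observations: list of (u, v) pixel coordinates of the landmark
    projections:  list of 3x4 camera projection matrices P = K [R | t]
    Returns the landmark position as a 3-vector in world coordinates.
    """
    A = []
    for (u, v), P in zip(observations, projections):
        # Each observation contributes two rows of the homogeneous system A X = 0.
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    A = np.stack(A)
    # Least-squares solution: right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Hypothetical example: two views of the same traffic sign.
K = np.array([[700.0,   0.0, 640.0],   # assumed intrinsics (focal lengths, principal point)
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])
R1, t1 = np.eye(3), np.zeros(3)                 # first camera at the world origin
R2, t2 = np.eye(3), np.array([-1.0, 0.0, 0.0])  # second camera translated ~1 m along x
P1 = K @ np.hstack([R1, t1.reshape(3, 1)])
P2 = K @ np.hstack([R2, t2.reshape(3, 1)])

# Project a known 3D point to synthesize pixel observations, then recover it.
X_true = np.array([2.0, 0.5, 10.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_dlt([uv1, uv2], [P1, P2]))  # ~ [2.0, 0.5, 10.0]
```

In the crowdsourced setting described in the abstract, the intrinsics and poses fed into such a triangulation would themselves come from self-supervised calibration, depth, and ego-motion networks combined with GPS, rather than being known a priori.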
Hemang Chawla, Elahe Arani, Terence Brouns, Bahram Zonooz, Matti Jukola
10.1109/IROS45743.2020.9341243
Subject areas: Highway Transportation Engineering; Communications
Hemang Chawla, Elahe Arani, Terence Brouns, Bahram Zonooz, Matti Jukola. Crowdsourced 3D Mapping: A Combined Multi-View Geometry and Self-Supervised Learning Approach [EB/OL]. (2020-07-25) [2025-08-02]. https://arxiv.org/abs/2007.12918.