PETR: Position Embedding Transformation for Multi-View 3D Object Detection
In this paper, we develop position embedding transformation (PETR) for multi-view 3D object detection. PETR encodes the position information of 3D coordinates into image features, producing 3D position-aware features. Object queries can perceive the 3D position-aware features and perform end-to-end object detection. PETR achieves state-of-the-art performance (50.4% NDS and 44.1% mAP) on the standard nuScenes dataset and ranks 1st place on the benchmark. It can serve as a simple yet strong baseline for future research. Code is available at \url{https://github.com/megvii-research/PETR}.
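The core mechanism described above, lifting per-pixel camera-frustum points into 3D space and encoding them as a position embedding that is added to the 2D image features, can be sketched as follows. This is a simplified illustration under assumptions made here, not the released implementation: the module name `Simple3DPositionEmbedding`, the depth binning, the MLP design, and the omission of coordinate normalization are all choices for brevity.

```python
import torch
import torch.nn as nn


class Simple3DPositionEmbedding(nn.Module):
    """Sketch of a PETR-style 3D position embedding (hypothetical module).

    For every feature-map pixel, sample a set of depth bins, back-project the
    resulting camera-frustum points into a shared 3D frame with the camera
    parameters, and encode the stacked coordinates with a small MLP. The
    output is added to the 2D image features so a DETR-style decoder attends
    to 3D position-aware features.
    """

    def __init__(self, feat_channels=256, num_depth_bins=64, depth_range=(1.0, 60.0)):
        super().__init__()
        self.num_depth_bins = num_depth_bins
        self.depth_range = depth_range
        # MLP mapping the stacked (x, y, z) coords of all depth bins to feature space.
        self.coord_mlp = nn.Sequential(
            nn.Linear(3 * num_depth_bins, feat_channels),
            nn.ReLU(inplace=True),
            nn.Linear(feat_channels, feat_channels),
        )

    def forward(self, img_feats, cam2world, intrinsics, img_size):
        """
        img_feats:  (N_cams, C, H, W) 2D backbone features
        cam2world:  (N_cams, 4, 4) camera-to-world transforms
        intrinsics: (N_cams, 3, 3) camera intrinsic matrices
        img_size:   (img_h, img_w) original image size in pixels
        """
        n, c, h, w = img_feats.shape
        img_h, img_w = img_size
        device = img_feats.device

        # Pixel centers of the feature map, mapped back to image coordinates.
        ys, xs = torch.meshgrid(
            (torch.arange(h, device=device) + 0.5) * img_h / h,
            (torch.arange(w, device=device) + 0.5) * img_w / w,
            indexing="ij",
        )
        depths = torch.linspace(*self.depth_range, self.num_depth_bins, device=device)

        # Frustum points (D, H, W, 3): (u*d, v*d, d) in homogeneous pixel coordinates.
        d = depths.view(-1, 1, 1).expand(-1, h, w)
        frustum = torch.stack([xs * d, ys * d, d], dim=-1)

        coords_3d = []
        for i in range(n):
            # Back-project to camera coordinates, then lift to the shared 3D frame.
            cam_pts = frustum @ torch.inverse(intrinsics[i]).T              # (D, H, W, 3)
            cam_pts_h = torch.cat([cam_pts, torch.ones_like(cam_pts[..., :1])], dim=-1)
            world_pts = (cam_pts_h @ cam2world[i].T)[..., :3]               # (D, H, W, 3)
            coords_3d.append(world_pts)
        coords_3d = torch.stack(coords_3d)                                  # (N, D, H, W, 3)

        # Stack depth bins per pixel and encode: (N, H, W, D*3) -> (N, H, W, C).
        coords_3d = coords_3d.permute(0, 2, 3, 1, 4).reshape(n, h, w, -1)
        pos_embed = self.coord_mlp(coords_3d).permute(0, 3, 1, 2)           # (N, C, H, W)

        # 3D position-aware features consumed by the transformer decoder.
        return img_feats + pos_embed
```

The full method additionally normalizes the 3D coordinates to the detection range before encoding and feeds the resulting features to a transformer decoder with object queries; those details are omitted in this sketch.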
Jian Sun, Tiancai Wang, Xiangyu Zhang, Yingfei Liu
Computing Technology; Computer Technology
Jian Sun, Tiancai Wang, Xiangyu Zhang, Yingfei Liu. PETR: Position Embedding Transformation for Multi-View 3D Object Detection [EB/OL]. (2022-03-10) [2025-05-21]. https://arxiv.org/abs/2203.05625.