Fast2comm: Collaborative perception combined with prior knowledge
Collaborative perception can significantly enhance perceptual accuracy by sharing complementary information among agents. However, real-world collaborative perception faces persistent challenges, particularly in balancing perception performance against bandwidth limitations and in coping with localization errors. To address these challenges, we propose Fast2comm, a prior knowledge-based collaborative perception framework. Specifically, (1) we propose a prior-supervised confidence feature generation method that effectively distinguishes foreground from background by producing highly discriminative confidence features; (2) we propose a GT-bounding-box-based spatial prior feature selection strategy that ensures only the most informative prior-knowledge features are selected and shared, thereby minimizing background noise and optimizing bandwidth efficiency while improving robustness to localization inaccuracies; (3) we decouple the feature fusion strategies of the training and testing phases, enabling dynamic bandwidth adaptation. To comprehensively validate our framework, we conduct extensive experiments on both real-world and simulated datasets. The results demonstrate the superior performance of our model and highlight the necessity of the proposed methods. Our code is available at https://github.com/Zhangzhengbin-TJ/Fast2comm.
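The abstract's contribution (2) amounts to transmitting only the highest-confidence spatial locations of a feature map instead of the full dense tensor. The following minimal sketch illustrates that general idea; it is not the authors' implementation, and all function names and shapes are hypothetical:

```python
import numpy as np

def select_features_by_confidence(features, confidence, k):
    """Keep only the k most confident spatial locations of a feature map.

    features:   (C, H, W) feature map an agent would share
    confidence: (H, W) foreground-confidence map
    k:          number of spatial locations to transmit (bandwidth budget)
    Returns a sparse representation: flat spatial indices and the
    selected (C, k) feature columns.
    """
    flat = confidence.reshape(-1)
    # indices of the k highest-confidence locations
    top = np.argpartition(-flat, k - 1)[:k]
    cols = features.reshape(features.shape[0], -1)[:, top]
    return top, cols

def reassemble(cols, top, C, H, W):
    """Receiver side: scatter the shared columns back into a dense map,
    leaving unshared (background) locations as zeros."""
    dense = np.zeros((C, H * W), dtype=cols.dtype)
    dense[:, top] = cols
    return dense.reshape(C, H, W)
```

Sending `k` columns instead of all `H*W` reduces the payload roughly by a factor of `H*W / k`, which is the bandwidth/performance trade-off the abstract refers to.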
Zhengbin Zhang, Yan Wu, Hongkun Zhang
Communications · Wireless Communications · Electronic Technology Applications
Zhengbin Zhang, Yan Wu, Hongkun Zhang. Fast2comm: Collaborative perception combined with prior knowledge [EB/OL]. (2025-04-29) [2025-06-19]. https://arxiv.org/abs/2505.00740.