GoIRL: Graph-Oriented Inverse Reinforcement Learning for Multimodal Trajectory Prediction
Trajectory prediction for surrounding agents is a challenging task in autonomous driving due to its inherent uncertainty and underlying multimodality. Unlike prevailing data-driven methods that primarily rely on supervised learning, in this paper, we introduce a novel Graph-oriented Inverse Reinforcement Learning (GoIRL) framework, which is an IRL-based predictor equipped with vectorized context representations. We develop a feature adaptor to effectively aggregate lane-graph features into grid space, enabling seamless integration with the maximum entropy IRL paradigm to infer the reward distribution and obtain a policy that can be sampled to induce multiple plausible plans. Furthermore, conditioned on the sampled plans, we implement a hierarchical parameterized trajectory generator with a refinement module to enhance prediction accuracy and a probability fusion strategy to boost prediction confidence. Extensive experimental results show that our approach not only achieves state-of-the-art performance on the large-scale Argoverse and nuScenes motion forecasting benchmarks but also exhibits superior generalization ability compared to existing supervised models.
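As a rough illustration of the maximum-entropy IRL step described in the abstract (reward inference over a grid, then a stochastic policy sampled to induce multiple plans), the following is a minimal sketch, not the authors' implementation. The grid size, discount factor, action set, and the random reward stand-in (a placeholder for rewards inferred from aggregated lane-graph features) are all illustrative assumptions.

import numpy as np

# Hypothetical sketch of grid-based maximum-entropy IRL policy inference.
# Reward values, grid resolution, and the transition model are assumptions.

H, W = 32, 32
gamma = 0.9
reward = np.random.randn(H, W)  # placeholder for rewards inferred from context features
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]  # up, down, left, right, stay


def shift(V, action):
    """Successor-state values for a deterministic grid move (wrap-around boundary for brevity)."""
    dr, dc = action
    return np.roll(np.roll(V, -dr, axis=0), -dc, axis=1)


def soft_value_iteration(reward, iters=80):
    """Max-entropy (soft) value iteration: V(s) = logsumexp_a [r(s) + gamma * V(s')]."""
    V = np.zeros_like(reward)
    for _ in range(iters):
        Q = np.stack([reward + gamma * shift(V, a) for a in ACTIONS])
        m = Q.max(axis=0)
        V = m + np.log(np.exp(Q - m).sum(axis=0))  # numerically stable soft-max backup
    policy = np.exp(Q - V)  # stochastic policy pi(a | s)
    return V, policy


def sample_plan(policy, start, steps=30, rng=None):
    """Roll out one plan by sampling actions from the stochastic policy."""
    rng = rng or np.random.default_rng()
    r, c = start
    plan = [(r, c)]
    for _ in range(steps):
        p = policy[:, r, c]
        a = rng.choice(len(ACTIONS), p=p / p.sum())
        dr, dc = ACTIONS[a]
        r = int(np.clip(r + dr, 0, policy.shape[1] - 1))
        c = int(np.clip(c + dc, 0, policy.shape[2] - 1))
        plan.append((r, c))
    return plan


V, policy = soft_value_iteration(reward)
plans = [sample_plan(policy, start=(H // 2, W // 2)) for _ in range(6)]  # multiple plausible plans

In the full framework, such sampled plans would then condition the hierarchical trajectory generator and refinement module; that downstream stage is not sketched here.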
Muleilan Pei, Shaoshuai Shi, Lu Zhang, Peiliang Li, Shaojie Shen
Subjects: Automation technology and equipment; Computing technology and computer science
Muleilan Pei, Shaoshuai Shi, Lu Zhang, Peiliang Li, Shaojie Shen. GoIRL: Graph-Oriented Inverse Reinforcement Learning for Multimodal Trajectory Prediction [EB/OL]. (2025-06-26) [2025-07-09]. https://arxiv.org/abs/2506.21121.