Event-based Graph Representation with Spatial and Motion Vectors for Asynchronous Object Detection
Event-based sensors offer high temporal resolution and low latency by generating sparse, asynchronous data. However, converting this irregular data into dense tensors for use in standard neural networks diminishes these inherent advantages, motivating research into graph representations. While such methods preserve sparsity and support asynchronous inference, their performance on downstream tasks remains limited due to suboptimal modeling of spatiotemporal dynamics. In this work, we propose a novel spatiotemporal multigraph representation to better capture spatial structure and temporal changes. Our approach constructs two decoupled graphs: a spatial graph leveraging B-spline basis functions to model global structure, and a temporal graph utilizing motion vector-based attention for local dynamic changes. This design enables the use of efficient 2D kernels in place of computationally expensive 3D kernels. We evaluate our method on the Gen1 automotive and eTraM datasets for event-based object detection, achieving over a 6% improvement in detection accuracy compared to previous graph-based works, with a 5x speedup, reduced parameter count, and no increase in computational cost. These results highlight the effectiveness of structured graph modeling for asynchronous vision. Project page: eventbasedvision.github.io/eGSMV.
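The abstract's decoupled design can be illustrated with a short sketch. The following is a minimal, assumed interpretation in PyTorch Geometric, not the authors' released code: the spatial branch uses SplineConv (a B-spline kernel over 2D pseudo-coordinates) to stand in for the spatial graph, and the temporal branch weights edges by a small scoring MLP over per-edge motion vectors as a stand-in for the motion vector-based attention. The module name, feature sizes, and the scoring MLP are all illustrative assumptions.

```python
# Hedged sketch of a decoupled spatial/temporal graph block, assuming
# PyTorch Geometric. Names and dimensions are illustrative, not the
# paper's implementation.
import torch
import torch.nn as nn
from torch_geometric.nn import SplineConv
from torch_geometric.utils import softmax


class DecoupledSpatioTemporalBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 5):
        super().__init__()
        # Spatial branch: B-spline kernel over 2D (x, y) pseudo-coordinates,
        # i.e., an efficient 2D kernel rather than a 3D spatiotemporal one.
        self.spatial_conv = SplineConv(in_ch, out_ch, dim=2,
                                       kernel_size=kernel_size)
        # Temporal branch: score each temporal edge from its motion vector
        # (dx/dt, dy/dt); this MLP is an assumed stand-in for the paper's
        # motion vector-based attention.
        self.motion_score = nn.Sequential(
            nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
        self.temporal_lin = nn.Linear(in_ch, out_ch)

    def forward(self, x, spatial_edge_index, spatial_pseudo,
                temporal_edge_index, motion_vec):
        # x: [N, in_ch] node features; spatial_pseudo: [E_s, 2] in [0, 1].
        h_spatial = self.spatial_conv(x, spatial_edge_index, spatial_pseudo)

        # Temporal edges point from past events to current events.
        src, dst = temporal_edge_index
        # Per-destination attention weights via scatter softmax.
        alpha = softmax(self.motion_score(motion_vec).squeeze(-1), dst)
        msg = self.temporal_lin(x[src]) * alpha.unsqueeze(-1)
        h_temporal = torch.zeros_like(h_spatial).index_add_(0, dst, msg)

        return torch.relu(h_spatial + h_temporal)
```

Read as a sketch, the design choice is that a 2D spatial kernel plus a lightweight temporal attention can approximate what a dense 3D kernel would capture, which is consistent with the abstract's claimed speedup and reduced parameter count.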
Aayush Atul Verma, Arpitsinh Vaghela, Bharatesh Chakravarthi, Kaustav Chanda, Yezhou Yang
Subject areas: Computing Technology, Computer Technology
Aayush Atul Verma, Arpitsinh Vaghela, Bharatesh Chakravarthi, Kaustav Chanda, Yezhou Yang. Event-based Graph Representation with Spatial and Motion Vectors for Asynchronous Object Detection [EB/OL]. (2025-07-20) [2025-08-10]. https://arxiv.org/abs/2507.15150.