
Lightweight Temporal Transformer Decomposition for Federated Autonomous Driving

Source: arXiv

English Abstract

Traditional vision-based autonomous driving systems often face difficulties in navigating complex environments when relying solely on single-image inputs. To overcome this limitation, incorporating temporal data, such as past image frames or steering sequences, has proven effective in enhancing robustness and adaptability in challenging scenarios. While previous high-performance methods exist, they often rely on resource-intensive fusion networks, making them impractical for training and unsuitable for federated learning. To address these challenges, we propose lightweight temporal transformer decomposition, a method that processes sequential image frames and temporal steering data by breaking down large attention maps into smaller matrices. This approach reduces model complexity, enabling efficient weight updates for convergence and real-time predictions while leveraging temporal information to enhance autonomous driving performance. Intensive experiments on three datasets demonstrate that our method outperforms recent approaches by a clear margin while achieving real-time performance. Additionally, real robot experiments further confirm the effectiveness of our method.
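The abstract does not specify how the attention maps are decomposed; as one plausible illustration of "breaking down large attention maps into smaller matrices", here is a minimal NumPy sketch of a low-rank (Linformer-style) attention. The projection matrix `E` and all function names are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(Q, K, V):
    # Standard attention: materializes the full (N x N) attention map.
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

def decomposed_attention(Q, K, V, E):
    # Low-rank variant (assumption, not the paper's exact scheme):
    # E (k x N, with k << N) projects keys and values down to k rows,
    # so only an (N x k) attention map is ever formed.
    d = Q.shape[-1]
    return softmax(Q @ (E @ K).T / np.sqrt(d)) @ (E @ V)

rng = np.random.default_rng(0)
N, d, k = 256, 64, 16                     # sequence length, head dim, rank
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
E = rng.standard_normal((k, N)) / np.sqrt(N)  # hypothetical projection

out = decomposed_attention(Q, K, V, E)
print(out.shape)  # same (N, d) output shape as full attention
```

The memory saving comes from the attention map shrinking from N x N to N x k, which is what makes per-round weight updates cheaper in a federated setting.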

Tuong Do, Binh X. Nguyen, Quang D. Tran, Erman Tjiputra, Te-Chuan Chiu, Anh Nguyen

Subject: Automation technology; automation equipment

Tuong Do, Binh X. Nguyen, Quang D. Tran, Erman Tjiputra, Te-Chuan Chiu, Anh Nguyen. Lightweight Temporal Transformer Decomposition for Federated Autonomous Driving [EB/OL]. (2025-06-30) [2025-07-16]. https://arxiv.org/abs/2506.23523.
