
LTDA-Drive: LLMs-guided Generative Models based Long-tail Data Augmentation for Autonomous Driving

Source: arXiv
Abstract

3D perception plays an essential role in improving the safety and performance of autonomous driving. Yet, existing models trained on real-world datasets, which naturally exhibit long-tail distributions, tend to underperform on rare but safety-critical vulnerable classes, such as pedestrians and cyclists. Existing reweighting and resampling techniques struggle with the scarcity and limited diversity of tail classes. To address these limitations, we introduce LTDA-Drive, a novel LLM-guided data augmentation framework designed to synthesize diverse, high-quality long-tail samples. LTDA-Drive replaces head-class objects in driving scenes with tail-class objects through a three-stage process: (1) text-guided diffusion models remove head-class objects, (2) generative models insert instances of the tail classes, and (3) an LLM agent filters out low-quality synthesized images. Experiments conducted on the KITTI dataset show that LTDA-Drive significantly improves tail-class detection, achieving a 34.75% improvement on rare classes over competing methods. These results further highlight the effectiveness of LTDA-Drive in tackling long-tail challenges by generating high-quality and diverse data.
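To make the three-stage pipeline described in the abstract concrete, the following is a minimal, purely illustrative Python sketch of how such an augmentation loop could be organized. All names here (Scene, remove_head_objects, insert_tail_object, llm_quality_filter, augment_long_tail) are hypothetical placeholders, not the authors' code or any real library API; the stage functions stand in for the text-guided diffusion removal, generative insertion, and LLM filtering components that the paper describes.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Scene:
    """A driving-scene image together with boxes of head-class objects (e.g., cars)."""
    image_path: str
    head_boxes: List[Tuple[float, float, float, float]]


def remove_head_objects(scene: Scene):
    """Stage 1 (assumed): text-guided diffusion inpainting erases head-class objects."""
    raise NotImplementedError("placeholder for a text-guided diffusion inpainting model")


def insert_tail_object(clean_image, tail_class: str):
    """Stage 2 (assumed): a generative model composites a tail-class instance (e.g., a cyclist)."""
    raise NotImplementedError("placeholder for a generative insertion model")


def llm_quality_filter(candidate_image) -> bool:
    """Stage 3 (assumed): an LLM agent judges realism/consistency and rejects poor samples."""
    raise NotImplementedError("placeholder for an LLM-based quality check")


def augment_long_tail(scenes: List[Scene], tail_classes: List[str]):
    """Overall loop: replace head-class objects with tail-class ones, keeping only filtered samples."""
    augmented = []
    for scene in scenes:
        clean = remove_head_objects(scene)              # stage 1: erase head-class objects
        for cls in tail_classes:
            candidate = insert_tail_object(clean, cls)  # stage 2: insert a tail-class instance
            if llm_quality_filter(candidate):           # stage 3: keep only high-quality samples
                augmented.append((candidate, cls))
    return augmented

The sketch only fixes the control flow implied by the abstract (remove, insert, filter); how each stage is realized, and how the filtered samples are fed back into detector training, is specific to the paper.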

Mahmut Yurt, Xin Ye, Yunsheng Ma, Jingru Luo, Abhirup Mallik, John Pauly, Burhaneddin Yaman, Liu Ren

Subjects: Automation Technology, Automation Equipment; Computing Technology, Computer Technology

Mahmut Yurt, Xin Ye, Yunsheng Ma, Jingru Luo, Abhirup Mallik, John Pauly, Burhaneddin Yaman, Liu Ren. LTDA-Drive: LLMs-guided Generative Models based Long-tail Data Augmentation for Autonomous Driving [EB/OL]. (2025-05-21) [2025-07-02]. https://arxiv.org/abs/2505.18198.
