
Large Language Models Can Achieve Explainable and Training-Free One-shot HRRP ATR

Source: arXiv

Abstract

This letter introduces a training-free and explainable framework for High-Resolution Range Profile (HRRP) automatic target recognition (ATR) that leverages large-scale pre-trained Large Language Models (LLMs). Diverging from conventional methods that require extensive task-specific training or fine-tuning, our approach converts one-dimensional HRRP signals into textual scattering-center representations. Prompts are designed to align the LLMs' semantic space for ATR via few-shot in-context learning, effectively exploiting their vast pre-existing knowledge without any parameter update. We make our code publicly available to foster research into LLMs for HRRP ATR.
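The abstract describes a two-step pipeline: convert the 1-D HRRP into a textual list of scattering centers, then wrap that text in a one-shot in-context-learning prompt for a pre-trained LLM. Below is a minimal, illustrative Python sketch of what such a conversion and prompt assembly could look like; the function names, peak-picking thresholds, and prompt wording are assumptions for illustration and are not taken from the authors' released code.

```python
import numpy as np
from scipy.signal import find_peaks


def hrrp_to_text(hrrp, max_centers=5):
    """Convert a 1-D HRRP magnitude profile into a textual scattering-center
    description (hypothetical format; the paper's prompts may differ)."""
    mag = np.abs(hrrp) / (np.max(np.abs(hrrp)) + 1e-12)          # normalize amplitudes
    peaks, props = find_peaks(mag, height=0.2, distance=3)       # dominant scatterers
    order = np.argsort(props["peak_heights"])[::-1][:max_centers]
    lines = [
        f"scattering center at range bin {int(peaks[i])}, "
        f"relative amplitude {props['peak_heights'][i]:.2f}"
        for i in order
    ]
    return "; ".join(lines)


def build_one_shot_prompt(support_hrrp, support_label, query_hrrp):
    """Assemble a one-shot in-context-learning prompt; no model parameters are updated."""
    return (
        "You are an expert in radar automatic target recognition.\n"
        "Each target is described by its HRRP scattering centers.\n\n"
        f"Example target (class: {support_label}): {hrrp_to_text(support_hrrp)}\n\n"
        f"Query target: {hrrp_to_text(query_hrrp)}\n"
        "Which class does the query target belong to? Explain your reasoning briefly."
    )


if __name__ == "__main__":
    # Synthetic HRRPs with a few strong scatterers plus noise, for demonstration only.
    rng = np.random.default_rng(0)
    support = np.zeros(128); support[[20, 45, 90]] = [1.0, 0.7, 0.4]
    query = np.zeros(128); query[[22, 47, 88]] = [0.9, 0.6, 0.5]
    support += 0.02 * rng.standard_normal(128)
    query += 0.02 * rng.standard_normal(128)

    prompt = build_one_shot_prompt(support, "aircraft A", query)
    print(prompt)  # this prompt would then be sent to a pre-trained LLM
```

The printed prompt contains one labeled support example and one query, so classification relies entirely on the LLM's in-context reasoning over the textual scatterer descriptions, consistent with the training-free setting described in the abstract.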

Lingfeng Chen, Panhe Hu, Zhiliang Pan, Qi Liu, Zhen Liu

Subject: Military Technology

Lingfeng Chen, Panhe Hu, Zhiliang Pan, Qi Liu, Zhen Liu. Large Language Models Can Achieve Explainable and Training-Free One-shot HRRP ATR [EB/OL]. (2025-06-03) [2025-07-02]. https://arxiv.org/abs/2506.02465.