LLM-Guided Agentic Object Detection for Open-World Understanding
Object detection traditionally relies on fixed category sets, requiring costly re-training to handle novel objects. While Open-World and Open-Vocabulary Object Detection (OWOD and OVOD) improve flexibility, OWOD lacks semantic labels for unknowns, and OVOD depends on user prompts, limiting autonomy. We propose an LLM-guided agentic object detection (LAOD) framework that enables fully label-free, zero-shot detection by prompting a Large Language Model (LLM) to generate scene-specific object names. These are passed to an open-vocabulary detector for localization, allowing the system to adapt its goals dynamically. We introduce two new metrics, Class-Agnostic Average Precision (CAAP) and Semantic Naming Average Precision (SNAP), to separately evaluate localization and naming. Experiments on LVIS, COCO, and COCO-OOD validate our approach, showing strong performance in detecting and naming novel objects. Our method offers enhanced autonomy and adaptability for open-world understanding.
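The pipeline the abstract describes — an LLM proposes scene-specific object names, which are then handed to an open-vocabulary detector for localization — can be sketched as follows. This is a minimal illustration only: the helper functions `propose_object_names` and `open_vocab_detect` are hypothetical stand-ins (a real system would query an actual LLM and a prompt-conditioned detector), not the authors' implementation.

```python
# Hedged sketch of the LAOD flow: LLM-proposed class names feed an
# open-vocabulary detector. All helpers below are illustrative stubs.

def propose_object_names(scene_description):
    # Stand-in for prompting an LLM, e.g. "list objects likely present
    # in this scene". Here a canned lookup simulates the LLM response.
    canned = {"kitchen": ["refrigerator", "sink", "kettle"]}
    return canned.get(scene_description, [])

def open_vocab_detect(image, class_names):
    # Stand-in for an open-vocabulary detector conditioned on the
    # proposed names; returns (box, label, score) triples.
    return [((10, 20, 50, 60), name, 0.9) for name in class_names]

def laod(image, scene_description):
    # The agentic loop in miniature: the detector's goal (its label set)
    # is set dynamically by the LLM rather than by a user prompt.
    names = propose_object_names(scene_description)
    return open_vocab_detect(image, names)

detections = laod(image=None, scene_description="kitchen")
print([label for _, label, _ in detections])
```

The key design point, as stated in the abstract, is that no fixed category set or user prompt is required: the label set is generated per scene, so the same code path handles novel objects without re-training.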
Furkan Mumcu, Michael J. Jones, Anoop Cherian, Yasin Yilmaz
Subjects: Computing technology; computer technology
Furkan Mumcu, Michael J. Jones, Anoop Cherian, Yasin Yilmaz. LLM-Guided Agentic Object Detection for Open-World Understanding [EB/OL]. (2025-07-14) [2025-07-25]. https://arxiv.org/abs/2507.10844