LLM-Empowered Embodied Agent for Memory-Augmented Task Planning in Household Robotics
We present an embodied robotic system with an LLM-driven agent-orchestration architecture for autonomous household object management. The system integrates memory-augmented task planning, enabling robots to execute high-level user commands while tracking past actions. It employs three specialized agents: a routing agent, a task planning agent, and a knowledge base agent, each powered by task-specific LLMs. By leveraging in-context learning, our system avoids the need for explicit model training. Retrieval-augmented generation (RAG) enables the system to retrieve context from past interactions, enhancing long-term object tracking. A combination of Grounded SAM and LLaMA3.2-Vision provides robust object detection, facilitating semantic scene understanding for task planning. Evaluation across three household scenarios demonstrates high task planning accuracy and an improvement in memory recall due to RAG. Specifically, Qwen2.5 yields the best performance for the specialized agents, while LLaMA3.1 excels in routing tasks. The source code is available at: https://github.com/marc1198/chat-hsr.
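As a rough illustration of the agent-orchestration pattern the abstract describes, the sketch below shows how a routing agent might dispatch a user command to either a task planning agent or a knowledge base agent, with a simple memory store standing in for the RAG pipeline. All names (call_llm, MemoryStore, handle_command), prompts, and the keyword-overlap retrieval are hypothetical placeholders, not the authors' implementation; call_llm is a stub rather than any specific library API.

```python
# Minimal sketch of the three-agent orchestration (routing, task planning,
# knowledge base) described in the abstract. All names, prompts, and the
# retrieval logic are illustrative placeholders, not the authors' code.

from dataclasses import dataclass, field


def call_llm(model: str, prompt: str) -> str:
    """Placeholder for an LLM completion call (e.g., Qwen2.5 or LLaMA3.1)."""
    raise NotImplementedError("Wire this to an LLM backend of your choice.")


@dataclass
class MemoryStore:
    """Naive RAG memory: stores past interactions, retrieves by keyword overlap."""
    entries: list[str] = field(default_factory=list)

    def add(self, entry: str) -> None:
        self.entries.append(entry)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        words = set(query.lower().split())
        scored = sorted(
            self.entries,
            key=lambda e: len(words & set(e.lower().split())),
            reverse=True,
        )
        return scored[:k]


def routing_agent(command: str) -> str:
    """Decide which specialized agent should handle the user command."""
    prompt = (
        "Classify the request as 'task_planning' (the robot must act) or "
        f"'knowledge_base' (a question about past actions/objects):\n{command}"
    )
    return call_llm("llama3.1", prompt).strip()


def task_planning_agent(command: str, memory: MemoryStore) -> str:
    """Produce a step-by-step plan, conditioned on retrieved past interactions."""
    context = "\n".join(memory.retrieve(command))
    plan = call_llm("qwen2.5", f"Context:\n{context}\n\nPlan steps for: {command}")
    memory.add(f"Executed: {command} -> {plan}")  # log the action for later recall
    return plan


def knowledge_base_agent(question: str, memory: MemoryStore) -> str:
    """Answer questions about previously handled objects from stored memory."""
    context = "\n".join(memory.retrieve(question))
    return call_llm("qwen2.5", f"Context:\n{context}\n\nAnswer: {question}")


def handle_command(command: str, memory: MemoryStore) -> str:
    """Top-level entry point: route the command, then delegate."""
    route = routing_agent(command)
    if route == "task_planning":
        return task_planning_agent(command, memory)
    return knowledge_base_agent(command, memory)
```

In the actual system, the retrieval step would presumably be backed by a proper RAG index over the robot's interaction history, and the resulting plans would be grounded in the Grounded SAM / LLaMA3.2-Vision perception output; the sketch only mirrors the control flow.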
Marc Glocker, Peter Hönig, Matthias Hirschmanner, Markus Vincze
Subject areas: automation technology and automation equipment; computing technology and computer technology
Marc Glocker, Peter Hönig, Matthias Hirschmanner, Markus Vincze. LLM-Empowered Embodied Agent for Memory-Augmented Task Planning in Household Robotics [EB/OL]. (2025-04-30) [2025-06-22]. https://arxiv.org/abs/2504.21716.