National Preprint Platform

Dynamic Robot Tool Use with Vision Language Models

Source: arXiv
Abstract

Tool use enhances a robot's task capabilities. Recent advances in vision-language models (VLMs) have equipped robots with sophisticated cognitive capabilities for tool-use applications. However, existing methodologies focus on elementary quasi-static tool manipulations or high-level tool selection while neglecting the critical aspect of task-appropriate tool grasping. To address this limitation, we introduce inverse Tool-Use Planning (iTUP), a novel VLM-driven framework that enables grounded fine-grained planning for versatile robotic tool use. Through an integrated pipeline of VLM-based tool and contact point grounding, position-velocity trajectory planning, and physics-informed grasp generation and selection, iTUP demonstrates versatility across (1) quasi-static and more challenging (2) dynamic and (3) cluster tool-use tasks. To ensure robust planning, our framework integrates stable and safe task-aware grasping by reasoning over semantic affordances and physical constraints. We evaluate iTUP and baselines on a comprehensive range of realistic tool use tasks including precision hammering, object scooping, and cluster sweeping. Experimental results demonstrate that iTUP ensures a thorough grounding of cognition and planning for challenging robot tool use across diverse environments.
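The abstract describes a three-stage pipeline: VLM-based tool and contact point grounding, position-velocity trajectory planning, and physics-informed grasp generation and selection. A minimal, hypothetical sketch of how such stages might compose is below; all function names, the fixed affordance lookup, and the lever-arm scoring heuristic are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ToolPlan:
    tool: str
    contact_point: tuple  # (x, y, z) on the tool, e.g. a hammer head
    grasp_point: tuple    # (x, y, z) on the handle

def ground_tool(task):
    """Stage 1 (stand-in for VLM grounding): pick a tool and contact point.
    A real system would query a vision-language model; this uses a fixed
    lookup for illustration only."""
    affordances = {"hammering": ("hammer", (0.30, 0.0, 0.02))}
    return affordances[task]

def plan_trajectory(start, goal, impact_speed, steps=5):
    """Stage 2: position-velocity waypoints from start to goal.
    Speed ramps linearly so the end effector reaches impact_speed at the
    goal -- a toy simplification of a dynamic motion such as hammering."""
    traj = []
    for i in range(steps + 1):
        a = i / steps
        pos = tuple(s + a * (g - s) for s, g in zip(start, goal))
        traj.append((pos, a * impact_speed))
    return traj

def select_grasp(candidates, contact_point):
    """Stage 3: physics-informed selection -- here, prefer the grasp
    farthest from the contact point (longer lever arm), a toy stand-in
    for the paper's stability and safety reasoning."""
    def lever(g):
        return sum((a - b) ** 2 for a, b in zip(g, contact_point)) ** 0.5
    return max(candidates, key=lever)

tool, contact = ground_tool("hammering")
grasp = select_grasp([(0.05, 0.0, 0.0), (0.12, 0.0, 0.0)], contact)
traj = plan_trajectory(start=(0.0, 0.0, 0.3), goal=contact, impact_speed=1.5)
plan = ToolPlan(tool, contact, grasp)
```

The composition mirrors the abstract's ordering: grounding fixes *where* on the tool the task happens, the grasp is chosen against that contact point, and the trajectory carries the tool to it at a task-appropriate speed.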

Noah Trupin, Zixing Wang, Ahmed H. Qureshi

Subjects: Automation Technology and Equipment; Computing and Computer Technology

Noah Trupin, Zixing Wang, Ahmed H. Qureshi. Dynamic Robot Tool Use with Vision Language Models [EB/OL]. (2025-05-02) [2025-06-29]. https://arxiv.org/abs/2505.01399.
