National Preprint Platform

CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition


Source: arXiv
Abstract

We introduce CARMA, a system for situational grounding in human-robot group interactions. Effective collaboration in such group settings requires situational awareness based on a consistent representation of present persons and objects coupled with an episodic abstraction of events regarding actors and manipulated objects. This calls for a clear and consistent assignment of instances, ensuring that robots correctly recognize and track actors, objects, and their interactions over time. To achieve this, CARMA uniquely identifies physical instances of such entities in the real world and organizes them into grounded triplets of actors, objects, and actions. To validate our approach, we conducted three experiments, where multiple humans and a robot interact: collaborative pouring, handovers, and sorting. These scenarios allow the assessment of the system's capabilities as to role distinction, multi-actor awareness, and consistent instance identification. Our experiments demonstrate that the system can reliably generate accurate actor-action-object triplets, providing a structured and robust foundation for applications requiring spatiotemporal reasoning and situated decision-making in collaborative settings.
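The abstract's central representation is the grounded actor-action-object triplet tied to uniquely identified physical instances. A minimal sketch of such a structure is shown below; all names (`GroundedTriplet`, the instance-ID scheme `person_1`/`bottle_2`) are illustrative assumptions for exposition, not the authors' actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GroundedTriplet:
    """One episodic event: which actor performed which action on which object instance."""
    actor_id: str   # persistent ID for a tracked person or robot (assumed naming scheme)
    action: str     # recognized action label, e.g. "pour", "handover", "sort"
    object_id: str  # persistent ID for the manipulated object instance

# Hypothetical event from a collaborative pouring scenario:
event = GroundedTriplet(actor_id="person_1", action="pour", object_id="bottle_2")
print(event)
```

Keeping IDs stable across frames is what the abstract calls consistent instance identification: the same physical bottle should always map to `bottle_2`, so sequences of triplets can support spatiotemporal reasoning.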

Joerg Deigmoeller, Stephan Hasler, Nakul Agarwal, Daniel Tanneberg, Anna Belardinelli, Reza Ghoddoosian, Chao Wang, Felix Ocker, Fan Zhang, Behzad Dariush, Michael Gienger

Subject: Computing Technology; Computer Technology

Joerg Deigmoeller, Stephan Hasler, Nakul Agarwal, Daniel Tanneberg, Anna Belardinelli, Reza Ghoddoosian, Chao Wang, Felix Ocker, Fan Zhang, Behzad Dariush, Michael Gienger. CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition [EB/OL]. (2025-06-25) [2025-07-16]. https://arxiv.org/abs/2506.20373.
