Towards Multimodal Social Conversations with Robots: Using Vision-Language Models

Source: arXiv
Abstract

Large language models have given social robots the ability to autonomously engage in open-domain conversations. However, they are still missing a fundamental social skill: making use of the multiple modalities that carry social interactions. While previous work has focused either on task-oriented interactions that require referencing the environment or on specific phenomena in social interactions, such as dialogue breakdowns, we outline the overall needs of a multimodal system for social conversations with robots. We then argue that vision-language models are able to process this wide range of visual information in a sufficiently general manner for autonomous social robots. We describe how to adapt them to this setting, which technical challenges remain, and briefly discuss evaluation practices.
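
The claim that a general-purpose vision-language model can ground open-domain conversation in the robot's visual surroundings can be made concrete with a short sketch. The snippet below illustrates the general approach only, not the authors' implementation; the LLaVA-1.5 checkpoint, the prompt wording, and the next_robot_turn helper are all assumptions made for the example.

# Minimal sketch (not the authors' system): grounding a social robot's
# next dialogue turn in its camera view by handing both the conversation
# history and the current frame to an off-the-shelf vision-language model.
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

MODEL_ID = "llava-hf/llava-1.5-7b-hf"  # assumed example checkpoint
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = LlavaForConditionalGeneration.from_pretrained(MODEL_ID)

def next_robot_turn(camera_frame: Image.Image, dialogue_history: str) -> str:
    """Generate the robot's next utterance, conditioned on both the
    ongoing conversation and what the robot currently sees."""
    # LLaVA-1.5 expects an <image> placeholder inside a USER/ASSISTANT prompt.
    prompt = (
        "USER: <image>\n"
        "You are a social robot chatting with the person you see. "
        f"Conversation so far:\n{dialogue_history}\n"
        "Give the robot's next utterance. ASSISTANT:"
    )
    inputs = processor(images=camera_frame, text=prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=60)
    # Decode the full sequence and keep only the newly generated reply.
    text = processor.decode(output_ids[0], skip_special_tokens=True)
    return text.split("ASSISTANT:")[-1].strip()

# One grounded conversational turn, using a hypothetical saved camera frame.
frame = Image.open("camera_frame.jpg")
print(next_robot_turn(frame, "Human: Hi there! What have you been up to?"))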

Ruben Janssens, Tony Belpaeme

Subjects: Computing and Computer Technology; Automation Technology and Equipment

Ruben Janssens, Tony Belpaeme. Towards Multimodal Social Conversations with Robots: Using Vision-Language Models [EB/OL]. (2025-07-25) [2025-08-10]. https://arxiv.org/abs/2507.19196.
