Generating Dialogues from Egocentric Instructional Videos for Task Assistance: Dataset, Method and Benchmark
Many everyday tasks, from fixing appliances and cooking recipes to car maintenance, require expert knowledge, especially when they are complex and multi-step. Despite growing interest in AI agents, there is a scarcity of dialogue-video datasets grounded in real-world task assistance. In this paper, we propose a simple yet effective approach that transforms single-person instructional videos into two-person task-guidance dialogues aligned with fine-grained steps and video clips. Our fully automatic approach, powered by large language models, offers an efficient alternative to the substantial cost and effort of human-assisted data collection. Using this technique, we build HowToDIV, a large-scale dataset containing 507 conversations, 6,636 question-answer pairs, and 24 hours of video clips across diverse tasks in cooking, mechanics, and planting. Each session is a multi-turn conversation in which an expert teaches a novice user how to perform a task step by step, while observing the user's surroundings through a wearable device equipped with a camera and microphone. We establish baseline benchmark performance on the HowToDIV dataset using the Gemma-3 model, to support future research on this new task of dialogue-based procedural-task assistance.
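To make the abstract's pipeline concrete, the following is a minimal sketch (not the authors' released code) of how an LLM might be prompted to rewrite per-step narration from a single-narrator instructional video into expert/novice dialogue turns, each kept aligned to its source clip. `StepSegment`, `make_dialogue_prompt`, `video_to_dialogue`, the prompt wording, and the `complete` callable are all hypothetical names introduced here for illustration.

```python
# Hypothetical sketch: turn per-step instructional narration into a
# two-person (novice user / expert assistant) dialogue via an LLM.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class StepSegment:
    step_id: int
    start_sec: float   # clip start within the source video
    end_sec: float     # clip end within the source video
    narration: str     # ASR transcript of the narrator for this step

PROMPT_TEMPLATE = """You are converting a single-person instructional video \
into a dialogue between an expert assistant and a novice user.
For the step below, write one question the novice might ask and the \
expert's answer, using only facts from the narration.

Step {step_id} narration:
{narration}

Format:
User: <question>
Expert: <answer>"""

def make_dialogue_prompt(segment: StepSegment) -> str:
    # Build the per-step prompt from the segment's narration.
    return PROMPT_TEMPLATE.format(step_id=segment.step_id,
                                  narration=segment.narration)

def video_to_dialogue(segments: List[StepSegment],
                      complete: Callable[[str], str]) -> List[dict]:
    """Produce one question-answer turn per task step.

    `complete` is any text-completion callable (e.g. a thin wrapper
    around an LLM API of your choice); each generated turn stays
    aligned to the (start_sec, end_sec) clip it came from.
    """
    turns = []
    for seg in segments:
        reply = complete(make_dialogue_prompt(seg))
        turns.append({"step_id": seg.step_id,
                      "clip": (seg.start_sec, seg.end_sec),
                      "dialogue": reply})
    return turns
```

Under these assumptions, the step segmentation and clip timestamps come from the source video, so the generated dialogue inherits its fine-grained step and clip alignment for free.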
Lavisha Aggarwal, Vikas Bahirwani, Lin Li, Andrea Colaco
Subjects: Computing and Computer Technology; Automation Technology and Equipment
Lavisha Aggarwal, Vikas Bahirwani, Lin Li, Andrea Colaco. Generating Dialogues from Egocentric Instructional Videos for Task Assistance: Dataset, Method and Benchmark [EB/OL]. (2025-08-15) [2025-08-28]. https://arxiv.org/abs/2508.11192.