Guided Reality: Generating Visually-Enriched AR Task Guidance with LLMs and Vision Models
Large language models (LLMs) have enabled the automatic generation of step-by-step augmented reality (AR) instructions for a wide range of physical tasks. However, existing LLM-based AR guidance often lacks the rich visual augmentations needed to embed instructions into spatial context and support user understanding. We present Guided Reality, a fully automated AR system that generates embedded, dynamic visual guidance from step-by-step instructions. Our system integrates LLMs and vision models to: 1) generate multi-step instructions from user queries, 2) identify appropriate types of visual guidance, 3) extract spatial information about key interaction points in the real world, and 4) embed visual guidance in physical space to support task execution. Drawing from a corpus of user manuals, we define five categories of visual guidance and propose a strategy for identifying the appropriate category for each step. We evaluate the system through a user study (N=16) in which participants completed real-world tasks and explored the system in the wild. Additionally, four instructors shared insights on how Guided Reality could be integrated into their training workflows.
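The abstract outlines a four-stage pipeline: instruction generation, guidance-type identification, spatial grounding, and AR embedding. The following is a minimal Python sketch of how such a pipeline could be wired together. All names here are illustrative assumptions rather than the authors' implementation, and the LLM/vision stages are stubbed; in particular, the five guidance-type names are placeholders, since the abstract does not enumerate the paper's actual categories.

```python
# Hypothetical sketch of the four-stage Guided Reality pipeline described in
# the abstract. Every class and function name below is an assumption; the
# LLM and vision-model calls are stubbed out.
from dataclasses import dataclass
from enum import Enum, auto


class GuidanceType(Enum):
    # Placeholder names only: the paper defines five categories of visual
    # guidance drawn from a corpus of user manuals, but the abstract does
    # not name them.
    HIGHLIGHT = auto()
    ARROW = auto()
    PATH = auto()
    LABEL = auto()
    ANIMATION = auto()


@dataclass
class InteractionPoint:
    """A key interaction point in world space (stage 3 output)."""
    x: float
    y: float
    z: float


@dataclass
class Step:
    text: str
    guidance: GuidanceType
    anchor: InteractionPoint


def generate_instructions(query: str) -> list[str]:
    """Stage 1: prompt an LLM to turn a user query into multi-step
    instructions. Stubbed with a single placeholder step."""
    return [f"Step for: {query}"]


def identify_guidance(step_text: str) -> GuidanceType:
    """Stage 2: classify which of the five visual-guidance categories
    fits the current step. Stubbed with a fixed choice."""
    return GuidanceType.HIGHLIGHT


def locate_interaction_point(step_text: str) -> InteractionPoint:
    """Stage 3: use a vision model to extract the step's key interaction
    point in the real world. Stubbed with the origin."""
    return InteractionPoint(0.0, 0.0, 0.0)


def embed_guidance(step: Step) -> None:
    """Stage 4: render the visual guidance at its anchor in physical
    space. Stubbed as a print statement."""
    print(f"{step.guidance.name} at "
          f"({step.anchor.x}, {step.anchor.y}, {step.anchor.z}): {step.text}")


def run_pipeline(query: str) -> None:
    """Chain the four stages for every generated instruction step."""
    for text in generate_instructions(query):
        step = Step(text, identify_guidance(text), locate_interaction_point(text))
        embed_guidance(step)


run_pipeline("replace the printer toner cartridge")
```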
Ada Yi Zhao, Aditya Gunturu, Ellen Yi-Luen Do, Ryo Suzuki
Subject categories: Computing Technology, Computer Technology; Automation Technology, Automation Equipment
Ada Yi Zhao, Aditya Gunturu, Ellen Yi-Luen Do, Ryo Suzuki. Guided Reality: Generating Visually-Enriched AR Task Guidance with LLMs and Vision Models [EB/OL]. (2025-08-05) [2025-08-16]. https://arxiv.org/abs/2508.03547.