
R-VLM: Region-Aware Vision Language Model for Precise GUI Grounding

Source: arXiv
Abstract

Visual agent models for automating human activities on Graphical User Interfaces (GUIs) have emerged as a promising research direction, driven by advances in large Vision Language Models (VLMs). A critical challenge in GUI automation is the precise grounding of interface elements across diverse platforms. Existing vision-only GUI agents directly ground elements from large and cluttered screenshots, requiring them to process substantial irrelevant information that compromises their accuracy. In addition, these approaches typically employ basic cross-entropy loss for learning grounding objectives, which fails to effectively capture grounding quality compared to established object detection metrics like Intersection-over-Union (IoU). To address these issues, we introduce R-VLM, a novel GUI grounding approach that leverages zoomed-in region proposals for precise element localization. We also propose an IoU-aware objective function that facilitates model convergence toward high IoU predictions. Our approach bridges the gap between VLMs and conventional object detection techniques, improving the state-of-the-art grounding accuracy by 13% across diverse GUI platforms on the GUI grounding benchmarks ScreenSpot and AgentStudio. In addition, our R-VLM approach shows 3.2-9.7% absolute accuracy improvements in GUI navigation tasks on the AITW and Mind2Web benchmarks.
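
Since this page carries only the abstract, the sketch below is a hypothetical PyTorch illustration of the two ideas the abstract names: grounding within a zoomed-in region proposal, and an IoU-aware training objective. The function names (`zoom_in_region`, `iou_aware_loss`), the crop margin, the loss weight `lam`, and the exact way the IoU term is combined with cross-entropy are all assumptions for illustration, not the paper's actual formulation.

```python
import torch

def box_iou(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """IoU between batches of boxes in (x1, y1, x2, y2) pixel coordinates."""
    ix1 = torch.max(pred[:, 0], gt[:, 0])
    iy1 = torch.max(pred[:, 1], gt[:, 1])
    ix2 = torch.min(pred[:, 2], gt[:, 2])
    iy2 = torch.min(pred[:, 3], gt[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_g = (gt[:, 2] - gt[:, 0]) * (gt[:, 3] - gt[:, 1])
    return inter / (area_p + area_g - inter).clamp(min=1e-6)

def iou_aware_loss(pred_boxes, gt_boxes, token_ce, lam=1.0):
    """Hypothetical IoU-aware objective: the usual token cross-entropy plus
    a (1 - IoU) penalty that rewards convergence toward high-IoU boxes."""
    return token_ce + lam * (1.0 - box_iou(pred_boxes, gt_boxes)).mean()

def zoom_in_region(image: torch.Tensor, coarse_box, margin=0.5):
    """Crop an enlarged window around a coarse box proposal so that a second
    grounding pass sees less screen clutter at higher effective resolution.
    `image` is a (C, H, W) tensor; `coarse_box` is (x1, y1, x2, y2) in pixels."""
    _, h, w = image.shape
    x1, y1, x2, y2 = coarse_box
    mx, my = margin * (x2 - x1), margin * (y2 - y1)
    cx1, cy1 = int(max(0, x1 - mx)), int(max(0, y1 - my))
    cx2, cy2 = int(min(w, x2 + mx)), int(min(h, y2 + my))
    # Return the crop plus its offset, so box predictions made inside the
    # crop can be mapped back to full-screenshot coordinates.
    return image[:, cy1:cy2, cx1:cx2], (cx1, cy1)
```

In such a two-stage scheme, the second grounding pass would predict a box inside the crop and add the returned offset back to recover full-screenshot coordinates.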

Joonhyung Park, Peng Tang, Sagnik Das, Srikar Appalaraju, Kunwar Yashraj Singh, R. Manmatha, Shabnam Ghadar

Computing Technology, Computer Technology

Joonhyung Park, Peng Tang, Sagnik Das, Srikar Appalaraju, Kunwar Yashraj Singh, R. Manmatha, Shabnam Ghadar. R-VLM: Region-Aware Vision Language Model for Precise GUI Grounding [EB/OL]. (2025-07-08) [2025-07-23]. https://arxiv.org/abs/2507.05673.
