National Preprint Platform

ScreenExplorer: Training a Vision-Language Model for Diverse Exploration in Open GUI World

Source: arXiv
English Abstract

The rapid progress of large language models (LLMs) has sparked growing interest in building Artificial General Intelligence (AGI) within Graphical User Interface (GUI) environments. However, existing GUI agents based on LLMs or vision-language models (VLMs) often fail to generalize to novel environments and rely heavily on manually curated, diverse datasets. To overcome these limitations, we introduce ScreenExplorer, a VLM trained via Group Relative Policy Optimization (GRPO) in real, dynamic, and open-ended GUI environments. We further introduce a world-model-based curiosity reward function to help the agent overcome the cold-start phase of exploration, and distilling experience streams further enhances the model's exploration capabilities. Our training framework improves model exploration in open GUI environments, with trained models showing better environmental adaptation and sustained exploration than statically deployed models. Our findings offer a scalable pathway toward AGI systems with self-improving capabilities in complex interactive settings.
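The two training ingredients named in the abstract can be sketched in a few lines. Below is a minimal, hypothetical illustration (not the paper's implementation): curiosity is modeled as the world model's squared prediction error on the next screen embedding, and GRPO advantages are computed by normalizing rewards within a sampled group rather than against a learned value baseline. The `toy_model` and all function names are assumptions for illustration only.

```python
import numpy as np

def curiosity_reward(world_model, state, action, next_state):
    """World-model curiosity (sketch): reward = squared prediction error
    on the next-state embedding, so poorly predicted (novel) screens
    yield larger exploration rewards."""
    pred = world_model(state, action)
    return float(np.sum((pred - next_state) ** 2))

def grpo_advantages(rewards, eps=1e-8):
    """GRPO-style advantage: normalize each rollout's reward within its
    sampled group (subtract group mean, divide by group std)."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# Toy linear "world model" (hypothetical): predicts next = state + action.
toy_model = lambda s, a: s + a

s, a = np.array([0.0, 1.0]), np.array([1.0, 0.0])
print(curiosity_reward(toy_model, s, a, np.array([1.0, 1.0])))  # 0.0: fully predicted
print(curiosity_reward(toy_model, s, a, np.array([3.0, 1.0])))  # 4.0: surprising transition
print(grpo_advantages([1.0, 2.0, 3.0]))  # zero-mean, unit-scaled group advantages
```

In this framing, a transition the world model already predicts well earns no curiosity bonus, which is what lets the agent escape the cold-start phase by seeking screens it cannot yet model.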

Runliang Niu, Jinglong Ji, Yi Chang, Qi Wang

Subject: Computing Technology, Computer Technology

Runliang Niu, Jinglong Ji, Yi Chang, Qi Wang. ScreenExplorer: Training a Vision-Language Model for Diverse Exploration in Open GUI World [EB/OL]. (2025-05-25) [2025-06-28]. https://arxiv.org/abs/2505.19095.
