
A Survey on GUI Agents with Foundation Models Enhanced by Reinforcement Learning


Source: arXiv
English Abstract

Graphical User Interface (GUI) agents, driven by Multi-modal Large Language Models (MLLMs), have emerged as a promising paradigm for enabling intelligent interaction with digital systems. This paper provides a structured survey of recent advances in GUI agents, focusing on architectures enhanced by Reinforcement Learning (RL). We first formalize GUI agent tasks as Markov Decision Processes and discuss typical execution environments and evaluation metrics. We then review the modular architecture of (M)LLM-based GUI agents, covering Perception, Planning, and Acting modules, and trace their evolution through representative works. Furthermore, we categorize GUI agent training methodologies into Prompt-based, Supervised Fine-Tuning (SFT)-based, and RL-based approaches, highlighting the progression from simple prompt engineering to dynamic policy learning via RL. Our summary illustrates how recent innovations in multimodal perception, decision reasoning, and adaptive action generation have significantly improved the generalization and robustness of GUI agents in complex real-world environments. We conclude by identifying key challenges and future directions for building more capable and reliable GUI agents.

Kaer Huang, Jiahao Li

Computing Technology; Computer Technology

Kaer Huang, Jiahao Li. A Survey on GUI Agents with Foundation Models Enhanced by Reinforcement Learning [EB/OL]. (2025-04-29) [2025-06-28]. https://arxiv.org/abs/2504.20464.
