AgentA/B: Automated and Scalable Web A/B Testing with Interactive LLM Agents
A/B testing is a widely adopted method for evaluating UI/UX design decisions in modern web applications. Yet traditional A/B testing remains constrained by its dependence on large-scale live traffic from human participants and by long waits for test results. Through formative interviews with six experienced industry practitioners, we identified critical bottlenecks in current A/B testing workflows. In response, we present AgentA/B, a novel system that leverages Large Language Model-based autonomous agents (LLM agents) to automatically simulate user interaction behaviors on real webpages. AgentA/B enables scalable deployment of LLM agents with diverse personas, each capable of navigating dynamic webpages and executing multi-step interactions such as searching, clicking, filtering, and purchasing. In a demonstrative controlled experiment, we employ AgentA/B to simulate a between-subjects A/B test with 1,000 LLM agents on Amazon.com and compare agent behaviors with real human shopping behaviors at scale. Our findings suggest AgentA/B can emulate human-like behavior patterns.
Dakuo Wang, Ting-Yao Hsu, Yuxuan Lu, Hansu Gu, Limeng Cui, Yaochen Xie, William Headean, Bingsheng Yao, Akash Veeragouni, Jiapeng Liu, Sreyashi Nag, Jessie Wang
Categories: automation technology and automation equipment; computing technology and computer technology
Dakuo Wang, Ting-Yao Hsu, Yuxuan Lu, Hansu Gu, Limeng Cui, Yaochen Xie, William Headean, Bingsheng Yao, Akash Veeragouni, Jiapeng Liu, Sreyashi Nag, Jessie Wang. AgentA/B: Automated and Scalable Web A/B Testing with Interactive LLM Agents [EB/OL]. (2025-04-13) [2025-05-10]. https://arxiv.org/abs/2504.09723.