
Acceptance Test Generation with Large Language Models: An Industrial Case Study

Source: arXiv

Abstract

Large language model (LLM)-powered assistants are increasingly used for generating program code and unit tests, but their application in acceptance testing remains underexplored. To help address this gap, this paper explores the use of LLMs for generating executable acceptance tests for web applications through a two-step process: (i) generating acceptance test scenarios in natural language (in Gherkin) from user stories, and (ii) converting these scenarios into executable test scripts (in Cypress), given the HTML code of the pages under test. This two-step approach supports acceptance test-driven development, enhances tester control, and improves test quality. The two steps were implemented in the AutoUAT and Test Flow tools, respectively, powered by GPT-4 Turbo, integrated into a partner company's workflow, and evaluated on real-world projects. The users found the acceptance test scenarios generated by AutoUAT helpful 95% of the time, even revealing previously overlooked cases. Regarding Test Flow, 92% of the acceptance test cases it generated were considered helpful: 60% were usable as generated, 8% required minor fixes, and 24% needed to be regenerated with additional inputs; the remaining 8% were discarded due to major issues. These results suggest that LLMs can, in fact, help improve the acceptance test process with appropriate tooling and supervision.
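
To illustrate the two-step process described in the abstract, here is a minimal sketch of how such a pipeline could be wired up. The prompt wording and function names are hypothetical (the actual AutoUAT and Test Flow prompts are not given in the abstract); only the two-step structure — user story to Gherkin, then Gherkin plus page HTML to Cypress — comes from the paper.

```python
# Hypothetical sketch of the two-step acceptance-test generation pipeline.
# The prompt texts below are illustrative, not the tools' actual prompts.

def build_scenario_prompt(user_story: str) -> str:
    """Step (i): ask an LLM for Gherkin acceptance scenarios from a user story."""
    return (
        "Write Gherkin acceptance test scenarios (Given/When/Then) "
        "for the following user story:\n" + user_story
    )

def build_script_prompt(gherkin_scenario: str, page_html: str) -> str:
    """Step (ii): ask an LLM to turn a Gherkin scenario into a Cypress test,
    using selectors taken from the HTML of the page under test."""
    return (
        "Convert this Gherkin scenario into an executable Cypress test, "
        "using selectors from the HTML below.\n"
        "Scenario:\n" + gherkin_scenario +
        "\nHTML:\n" + page_html
    )

# A run would send each prompt to the LLM (e.g., GPT-4 Turbo) in turn, with a
# tester reviewing the Gherkin output of step (i) before step (ii) runs --
# this review point is what supports acceptance test-driven development.
story = "As a customer, I want to reset my password so I can regain access."
scenario_prompt = build_scenario_prompt(story)            # sent to the LLM
script_prompt = build_script_prompt(
    "Scenario: Reset password with a valid email",
    "<form id='reset-form'><input name='email'></form>",
)                                                         # sent to the LLM
```

The split into two prompts, rather than one story-to-script prompt, is what gives testers a natural checkpoint between scenario generation and script generation.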

Margarida Ferreira, Luis Viegas, Joao Pascoal Faria, Bruno Lima

Subject: Computing and Computer Technology

Margarida Ferreira, Luis Viegas, Joao Pascoal Faria, Bruno Lima. Acceptance Test Generation with Large Language Models: An Industrial Case Study [EB/OL]. (2025-04-09) [2025-05-01]. https://arxiv.org/abs/2504.07244.