Testing the Untestable? An Empirical Study on the Testing Process of LLM-Powered Software Systems
Background: Software systems powered by large language models are becoming a routine part of everyday technologies, supporting applications across a wide range of domains. In software engineering, many studies have focused on how LLMs support tasks such as code generation, debugging, and documentation. However, there has been limited focus on how full systems that integrate LLMs are tested during development. Aims: This study explores how LLM-powered systems are tested in the context of real-world application development. Method: We conducted an exploratory case study using 99 individual reports written by students who built and deployed LLM-powered applications as part of a university course. Each report was independently analyzed using thematic analysis, supported by a structured coding process. Results: Testing strategies combined manual and automated methods to evaluate both system logic and model behavior. Common practices included exploratory testing, unit testing, and prompt iteration. Reported challenges included integration failures, unpredictable outputs, prompt sensitivity, hallucinations, and uncertainty about correctness. Conclusions: Testing LLM-powered systems required adaptations to traditional verification methods, blending source-level reasoning with behavior-aware evaluations. These findings provide evidence on the practical context of testing generative components in software systems.
Cleyton Magalhaes, Italo Santos, Brody Stuart-Verner, Ronnie de Souza Santos
Computing Technology; Computer Technology
Cleyton Magalhaes, Italo Santos, Brody Stuart-Verner, Ronnie de Souza Santos. Testing the Untestable? An Empirical Study on the Testing Process of LLM-Powered Software Systems [EB/OL]. (2025-08-04) [2025-08-11]. https://arxiv.org/abs/2508.00198