Can Large Language Models Help Students Prove Software Correctness? An Experimental Study with Dafny
Students in computing education increasingly use large language models (LLMs) such as ChatGPT. Yet the role of LLMs in supporting cognitively demanding tasks, such as deductive program verification, remains poorly understood. This paper investigates how students interact with an LLM when solving formal verification exercises in Dafny, a verification-aware language that lets programmers write formal specifications and automatically checks that the implementation satisfies them. We conducted a mixed-methods study with master's students enrolled in a formal methods course. Each participant completed two verification problems, one with access to a custom ChatGPT interface that logged all interactions, and the other without. We identified strategies used by successful students and assessed the level of trust students place in LLMs. Our findings show that students perform significantly better when using ChatGPT; however, performance gains are tied to prompt quality. We conclude with practical recommendations for integrating LLMs into formal methods courses more effectively, including designing LLM-aware challenges that promote learning rather than substitution.
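For readers unfamiliar with Dafny, a minimal illustrative example (not taken from the study's exercises) shows the specify-then-verify workflow the abstract describes: the programmer states a formal postcondition with ensures clauses, and the Dafny verifier automatically proves that the method body satisfies it.

    // A hypothetical example: absolute value with a formal specification.
    method Abs(x: int) returns (y: int)
      ensures 0 <= y           // the result is never negative
      ensures y == x || y == -x // the result is x or its negation
    {
      if x < 0 {
        y := -x;
      } else {
        y := x;
      }
    }

If the body were wrong, say returning x unconditionally, the verifier would reject the method at compile time rather than at runtime, which is the kind of feedback loop the study's verification exercises revolve around.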
Carolina Carreira, Álvaro Silva, Alexandre Abreu, Alexandra Mendes
Computing education, computer science
Carolina Carreira, Álvaro Silva, Alexandre Abreu, Alexandra Mendes. Can Large Language Models Help Students Prove Software Correctness? An Experimental Study with Dafny [EB/OL]. (2025-07-11) [2025-07-16]. https://arxiv.org/abs/2506.22370.