Supporting Software Formal Verification with Large Language Models: An Experimental Study
Formal methods have long been employed for requirements verification. However, automatically deriving formal properties from natural language requirements remains difficult. SpecVerify addresses this challenge by integrating large language models (LLMs) with formal verification tools, providing a more flexible mechanism for expressing requirements. The framework combines Claude 3.5 Sonnet with the ESBMC verifier to form an automated workflow. Evaluated on nine cyber-physical systems from Lockheed Martin, SpecVerify achieves 46.5% verification accuracy, comparable to NASA's CoCoSim but with fewer false positives. Our framework formulates assertions that extend beyond the expressive power of LTL and identifies falsifiable cases that more traditional methods miss. Counterexample analysis reveals CoCoSim's limitations stemming from model connection errors and numerical approximation issues. While SpecVerify advances verification automation, our comparative study of Claude, ChatGPT, and Llama shows that high-quality requirements documentation and human monitoring remain critical, as models occasionally misinterpret specifications. Our results demonstrate that LLMs can significantly lower the barriers to formal verification, while highlighting the continued importance of human-machine collaboration in achieving optimal results.
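To make the workflow concrete, the following is a minimal sketch of the kind of pipeline the abstract describes: a natural-language requirement is translated by an LLM into C assertions, which are injected into the source under verification and checked with ESBMC. The `generate_assertions` stub, the `/* SPEC */` marker, and the ESBMC flags are illustrative assumptions, not the paper's actual prompts or tooling.

```python
import os
import subprocess
import tempfile

def generate_assertions(requirement: str) -> str:
    """Translate a natural-language requirement into C assert() code.

    In the actual pipeline this would prompt an LLM such as Claude 3.5
    Sonnet; stubbed here with a fixed example so the sketch runs.
    """
    return "assert(altitude >= 0.0 && altitude <= max_altitude);"

def verify(c_source: str, requirement: str, unwind: int = 10) -> bool:
    """Inject LLM-generated assertions and bounded-model-check with ESBMC."""
    assertions = generate_assertions(requirement)
    # Assumption: the source under test carries a /* SPEC */ marker
    # where property checks should be placed.
    instrumented = c_source.replace("/* SPEC */", assertions)

    with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
        f.write(instrumented)
        path = f.name
    try:
        # ESBMC reports "VERIFICATION FAILED" with a counterexample trace
        # when an assertion can be violated; the flags here are typical
        # defaults, not necessarily those used by SpecVerify.
        result = subprocess.run(
            ["esbmc", path, "--unwind", str(unwind)],
            capture_output=True, text=True,
        )
        return "VERIFICATION SUCCESSFUL" in result.stdout
    finally:
        os.unlink(path)
```

Because ESBMC checks arbitrary C expressions over program state, assertions produced this way can capture properties beyond what LTL can express, which is the flexibility the abstract highlights.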
Weiqi Wang, Marie Farrell, Lucas C. Cordeiro, Liping Zhao
Computing Technology; Computer Technology
Weiqi Wang, Marie Farrell, Lucas C. Cordeiro, Liping Zhao. Supporting Software Formal Verification with Large Language Models: An Experimental Study [EB/OL]. (2025-07-07) [2025-07-21]. https://arxiv.org/abs/2507.04857.