
Towards Reliable Proof Generation with LLMs: A Neuro-Symbolic Approach

Source: arXiv

Abstract

Large language models (LLMs) struggle with formal domains that require rigorous logical deduction and symbolic reasoning, such as mathematical proof generation. We propose a neuro-symbolic approach that combines LLMs' generative strengths with structured components to overcome this challenge. As a proof-of-concept, we focus on geometry problems. Our approach is two-fold: (1) we retrieve analogous problems and use their proofs to guide the LLM, and (2) a formal verifier evaluates the generated proofs and provides feedback, helping the model fix incorrect proofs. We demonstrate that our method significantly improves proof accuracy for OpenAI's o1 model (58%-70% improvement); both analogous problems and the verifier's feedback contribute to these gains. More broadly, shifting to LLMs that generate provably correct conclusions could dramatically improve their reliability, accuracy and consistency, unlocking complex tasks and critical real-world applications that require trustworthiness.
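The abstract describes a retrieve-guide-verify-repair loop: proofs of analogous problems guide the LLM's first attempt, and a formal verifier's feedback drives iterative repair. The paper does not include code here; the following is a minimal sketch of that loop, assuming hypothetical retriever, generator, and verifier interfaces (all names are illustrative, not the authors' implementation).

```python
from typing import Callable, Optional

# Hypothetical interfaces -- illustrative only, not the authors' code.
Retriever = Callable[[str, int], list[str]]                # (problem, k) -> proofs of analogous problems
ProofGenerator = Callable[[str, list[str], Optional[str]], str]  # (problem, analogous proofs, feedback) -> candidate proof
ProofVerifier = Callable[[str, str], tuple[bool, str]]     # (problem, proof) -> (is_valid, feedback)


def generate_verified_proof(
    problem: str,
    retrieve: Retriever,
    generate: ProofGenerator,
    verify: ProofVerifier,
    k: int = 3,
    max_rounds: int = 5,
) -> Optional[str]:
    """Sketch of the loop described in the abstract.

    (1) Proofs of analogous problems guide the LLM's first attempt.
    (2) A formal verifier checks each attempt; its feedback is fed back
        to the LLM until a proof is accepted or the repair budget runs out.
    """
    analogous_proofs = retrieve(problem, k)
    feedback: Optional[str] = None
    for _ in range(max_rounds):
        proof = generate(problem, analogous_proofs, feedback)
        ok, feedback = verify(problem, proof)
        if ok:
            return proof   # verifier accepted the proof
    return None            # no verified proof within the repair budget
```

In this reading, the verifier is the source of trust: only proofs it accepts are returned, while its feedback on rejected attempts is what lets the LLM fix incorrect proofs.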

Oren Sultan, Eitan Stern, Dafna Shahaf

Mathematics

Oren Sultan, Eitan Stern, Dafna Shahaf. Towards Reliable Proof Generation with LLMs: A Neuro-Symbolic Approach [EB/OL]. (2025-05-20) [2025-06-17]. https://arxiv.org/abs/2505.14479.
