
Grammars of Formal Uncertainty: When to Trust LLMs in Automated Reasoning Tasks

Source: arXiv
Abstract

Large language models (LLMs) show remarkable promise for democratizing automated reasoning by generating formal specifications. However, a fundamental tension exists: LLMs are probabilistic, while formal verification demands deterministic guarantees. This paper addresses this epistemological gap by comprehensively investigating failure modes and uncertainty quantification (UQ) in LLM-generated formal artifacts. Our systematic evaluation of five frontier LLMs reveals that Satisfiability Modulo Theories (SMT)-based autoformalization has a domain-specific impact on accuracy (from +34.8% on logical tasks to -44.5% on factual ones), and that known UQ techniques, such as the entropy of token probabilities, fail to identify these errors. We introduce a probabilistic context-free grammar (PCFG) framework to model LLM outputs, yielding a refined uncertainty taxonomy. We find that uncertainty signals are task-dependent (e.g., grammar entropy is most predictive for logic tasks, with AUROC > 0.93). Finally, a lightweight fusion of these signals enables selective verification, drastically reducing errors (by 14-100%) with minimal abstention and transforming LLM-driven formalization into a reliable engineering discipline.
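To make the abstract's "grammar entropy" signal concrete, the sketch below shows one standard way to compute the derivational entropy of a PCFG, which is the kind of quantity the paper reports as most predictive on logic tasks. This is a minimal illustration, not the paper's method: the grammar, symbol names, and probabilities are hypothetical stand-ins for a grammar fit to sampled formal outputs, and the computation assumes a proper, consistent PCFG (expected child-count matrix with spectral radius below 1).

```python
import numpy as np

# Toy PCFG: nonterminal -> list of (probability, right-hand side) rules.
# Lowercase symbols are terminals. This grammar is a hypothetical stand-in
# for a grammar fit to sampled SMT outputs, chosen only for illustration.
PCFG = {
    "S":    [(0.7, ("EXPR",)), (0.3, ("S", "EXPR"))],
    "EXPR": [(0.6, ("atom",)), (0.4, ("op", "EXPR", "EXPR"))],
}

def grammar_entropy(pcfg, start="S"):
    """Derivational entropy (in nats) of a proper, consistent PCFG.

    Uses the classical linear-system characterization H = h + E @ H,
    where h[A] is the Shannon entropy of A's rule distribution and
    E[A, B] is the expected number of occurrences of nonterminal B on
    the right-hand side of a rule sampled from A. Consistency (spectral
    radius of E below 1) guarantees that (I - E) is invertible.
    """
    nts = list(pcfg)
    idx = {nt: i for i, nt in enumerate(nts)}
    h = np.zeros(len(nts))               # per-nonterminal rule entropy
    E = np.zeros((len(nts), len(nts)))   # expected child counts
    for A, rules in pcfg.items():
        for p, rhs in rules:
            h[idx[A]] -= p * np.log(p)
            for sym in rhs:
                if sym in idx:           # terminals add no derivational entropy
                    E[idx[A], idx[sym]] += p
    H = np.linalg.solve(np.eye(len(nts)) - E, h)
    return H[idx[start]]

print(f"grammar entropy of start symbol: {grammar_entropy(PCFG):.3f} nats")
```

In the setting the abstract describes, such an entropy computed over a grammar fit to multiple sampled formalizations of the same task would serve as one uncertainty signal; a selective-verification rule would then abstain from (or route to review) any generation whose fused uncertainty score crosses a threshold.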

Debargha Ganguly, Vikash Singh, Sreehari Sankar, Biyao Zhang, Xuecen Zhang, Srinivasan Iyengar, Xiaotian Han, Amit Sharma, Shivkumar Kalyanaraman, Vipin Chaudhary

Subjects: computing technology; computer technology

Debargha Ganguly, Vikash Singh, Sreehari Sankar, Biyao Zhang, Xuecen Zhang, Srinivasan Iyengar, Xiaotian Han, Amit Sharma, Shivkumar Kalyanaraman, Vipin Chaudhary. Grammars of Formal Uncertainty: When to Trust LLMs in Automated Reasoning Tasks [EB/OL]. (2025-05-26) [2025-06-27]. https://arxiv.org/abs/2505.20047.