Hierarchical Attention Generates Better Proofs

Source: arXiv
Abstract

Large language models (LLMs) have shown promise in formal theorem proving, but their token-level processing often fails to capture the inherent hierarchical nature of mathematical proofs. We introduce Hierarchical Attention, a regularization method that aligns LLMs' attention mechanisms with mathematical reasoning structures. Our approach establishes a five-level hierarchy from foundational elements to high-level concepts, ensuring structured information flow in proof generation. Experiments demonstrate that our method improves proof success rates by 2.05% on miniF2F and 1.69% on ProofNet while reducing proof complexity by 23.81% and 16.50%, respectively. The code is available at https://github.com/Car-pe/HAGBP.
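
The abstract does not describe the regularizer concretely; the sketch below is a hypothetical illustration, not the paper's implementation. It assumes each token has already been tagged with one of the five hierarchy levels (0 = foundational element, 4 = high-level concept) and that the penalty discourages attention from lower-level query tokens onto higher-level key tokens, so that information flows upward through the hierarchy. The function name hierarchy_violation_loss, the tensor shapes, and the reg_weight knob are all assumptions made for illustration.

import torch

def hierarchy_violation_loss(attn: torch.Tensor, levels: torch.Tensor) -> torch.Tensor:
    # attn:   (batch, heads, seq, seq) softmax attention weights, each row sums to 1
    # levels: (batch, seq) integer hierarchy level per token,
    #         0 = foundational element ... 4 = high-level concept
    # Mark query/key pairs where the key token sits strictly higher in the hierarchy
    # than the query token.
    violates = levels.unsqueeze(2) < levels.unsqueeze(1)   # (batch, seq, seq)
    mask = violates.unsqueeze(1).to(attn.dtype)            # broadcast over heads
    # Average attention mass placed on hierarchy-violating positions;
    # the regularizer drives this toward zero.
    return (attn * mask).sum(dim=-1).mean()

# Illustrative usage in a training step (reg_weight is a hypothetical hyperparameter):
# loss = lm_loss + reg_weight * hierarchy_violation_loss(attn_weights, token_levels)

In practice such a term would be added to the standard language-modeling objective during fine-tuning; the exact hierarchy definition and penalty direction should be taken from the paper and the linked repository.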

Jianlong Chen, Chao Li, Yang Yuan, Andrew C Yao

Subjects: Computing Technology, Computer Technology

Jianlong Chen, Chao Li, Yang Yuan, Andrew C Yao. Hierarchical Attention Generates Better Proofs[EB/OL]. (2025-04-27)[2025-06-29]. https://arxiv.org/abs/2504.19188.
