
Hallucination Detection in LLMs via Topological Divergence on Attention Graphs

Source: arXiv

Abstract

Hallucination, i.e., generating factually incorrect content, remains a critical challenge for large language models (LLMs). We introduce TOHA, a TOpology-based HAllucination detector in the RAG setting, which leverages a topological divergence metric to quantify the structural properties of graphs induced by attention matrices. Examining the topological divergence between prompt and response subgraphs reveals consistent patterns: higher divergence values in specific attention heads correlate with hallucinated outputs, independent of the dataset. Extensive experiments, including evaluation on question answering and data-to-text tasks, show that our approach achieves state-of-the-art or competitive results on several benchmarks, two of which were annotated by us and are being publicly released to facilitate further research. Beyond its strong in-domain performance, TOHA maintains remarkable domain transferability across multiple open-source LLMs. Our findings suggest that analyzing the topological structure of attention matrices can serve as an efficient and robust indicator of factual reliability in LLMs.
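To make the core idea concrete, here is a toy sketch of a topological-divergence-style score between a prompt subgraph and the full attention graph. This is an illustrative proxy, not the paper's TOHA metric: the function name, the symmetrization step, and the use of minimum-spanning-tree weight as the topological summary are all assumptions for this example.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree


def toy_topological_divergence(attn, prompt_len):
    """Toy proxy for a topological divergence between the prompt
    subgraph and the full prompt+response attention graph.
    Illustrative only -- not the paper's exact TOHA computation."""
    # Symmetrize attention and turn it into distances:
    # strong attention -> short edge.
    sym = (attn + attn.T) / 2.0
    dist = 1.0 - sym
    np.fill_diagonal(dist, 0.0)  # no self-loops

    def mst_weight(d):
        # Total edge weight of a minimum spanning tree/forest.
        return float(minimum_spanning_tree(d).sum())

    full_graph = mst_weight(dist)
    prompt_only = mst_weight(dist[:prompt_len, :prompt_len])
    # Divergence proxy: extra cost of attaching response tokens
    # to the prompt's attention structure.
    return full_graph - prompt_only


# Synthetic example: 6 prompt tokens + 4 response tokens.
rng = np.random.default_rng(0)
attn = rng.uniform(0.0, 1.0, size=(10, 10))
print(toy_topological_divergence(attn, prompt_len=6))
```

In the paper's setting, a score like this would be computed per attention head, and heads whose divergence consistently separates hallucinated from grounded responses would be used as detectors.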

Alexandra Bazarova, Aleksandr Yugay, Andrey Shulga, Alina Ermilova, Andrei Volodichev, Konstantin Polev, Julia Belikova, Rauf Parchiev, Dmitry Simakov, Maxim Savchenko, Andrey Savchenko, Serguei Barannikov, Alexey Zaytsev

Subjects: Computing Technology; Computer Science

Alexandra Bazarova, Aleksandr Yugay, Andrey Shulga, Alina Ermilova, Andrei Volodichev, Konstantin Polev, Julia Belikova, Rauf Parchiev, Dmitry Simakov, Maxim Savchenko, Andrey Savchenko, Serguei Barannikov, Alexey Zaytsev. Hallucination Detection in LLMs via Topological Divergence on Attention Graphs [EB/OL]. (2025-04-14) [2025-04-28]. https://arxiv.org/abs/2504.10063.
