
Multi-Level Explanations for Generative Language Models


Source: arXiv
Abstract

Despite the increasing use of large language models (LLMs) for context-grounded tasks like summarization and question answering, understanding what makes an LLM produce a certain response is challenging. We propose Multi-Level Explanations for Generative Language Models (MExGen), a technique to provide explanations for context-grounded text generation. MExGen assigns scores to parts of the context to quantify their influence on the model's output. It extends attribution methods like LIME and SHAP to LLMs used in context-grounded tasks where (1) inference cost is high, (2) input text is long, and (3) the output is text. We conduct a systematic evaluation, both automated and human, of perturbation-based attribution methods for summarization and question answering. The results show that our framework can provide more faithful explanations of generated output than available alternatives, including LLM self-explanations. We open-source code for MExGen as part of the ICX360 toolkit: https://github.com/IBM/ICX360.
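The core idea the abstract describes, scoring parts of the context by how much the model's output changes when they are perturbed, can be illustrated with a minimal sketch. This is not the actual MExGen implementation: `toy_generate` is a hypothetical stand-in for an LLM, and the string-similarity measure is an illustrative choice, not the scalarization used in the paper.

```python
from difflib import SequenceMatcher

def attribute(context_parts, query, generate):
    """Perturbation-based attribution sketch: score each context part by how
    much the generated output changes when that part is removed."""
    baseline = generate(" ".join(context_parts), query)
    scores = []
    for i in range(len(context_parts)):
        # Remove one part of the context and regenerate.
        perturbed = " ".join(p for j, p in enumerate(context_parts) if j != i)
        output = generate(perturbed, query)
        # Larger drop in output similarity => more influential part.
        similarity = SequenceMatcher(None, baseline, output).ratio()
        scores.append(1.0 - similarity)
    return scores

# Toy stand-in for an LLM: keeps context sentences mentioning the query term.
def toy_generate(context, query):
    return " ".join(s for s in context.split(". ") if query in s)

parts = ["Paris is the capital of France", "Berlin is in Germany"]
scores = attribute(parts, "Paris", toy_generate)
# The part that mentions "Paris" receives the higher attribution score.
```

A real implementation would batch the perturbed prompts to amortize inference cost and compare outputs with a semantic similarity metric rather than character overlap.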

Lucas Monteiro Paes, Dennis Wei, Hyo Jin Do, Hendrik Strobelt, Ronny Luss, Amit Dhurandhar, Manish Nagireddy, Karthikeyan Natesan Ramamurthy, Prasanna Sattigeri, Werner Geyer, Soumya Ghosh

Subject areas: Computing Technology; Computer Technology

Lucas Monteiro Paes, Dennis Wei, Hyo Jin Do, Hendrik Strobelt, Ronny Luss, Amit Dhurandhar, Manish Nagireddy, Karthikeyan Natesan Ramamurthy, Prasanna Sattigeri, Werner Geyer, Soumya Ghosh. Multi-Level Explanations for Generative Language Models [EB/OL]. (2025-07-23) [2025-08-05]. https://arxiv.org/abs/2403.14459.
