
PLEX: Perturbation-free Local Explanations for LLM-Based Text Classification

Source: arXiv
Abstract

Large Language Models (LLMs) excel in text classification, but their complexity hinders interpretability, making it difficult to understand the reasoning behind their predictions. Explainable AI (XAI) methods like LIME and SHAP offer local explanations by identifying influential words, but they rely on computationally expensive perturbations. These methods typically generate thousands of perturbed sentences and perform inferences on each, incurring a substantial computational burden, especially with LLMs. To address this, we propose Perturbation-free Local Explanation (PLEX), a novel method that leverages the contextual embeddings extracted from the LLM and a "Siamese network"-style neural network trained to align with feature importance scores. This one-off training eliminates the need for subsequent perturbations, enabling efficient explanations for any new sentence. We demonstrate PLEX's effectiveness on four different classification tasks (sentiment, fake news, fake COVID-19 news, and depression), showing more than 92% agreement with LIME and SHAP. Our evaluation using a "stress test" reveals that PLEX accurately identifies influential words, leading to a similar decline in classification accuracy as observed with LIME and SHAP when these words are removed. Notably, in some cases, PLEX demonstrates superior performance in capturing the impact of key features. PLEX dramatically accelerates explanation, reducing time and computational overhead by two and four orders of magnitude, respectively. This work offers a promising solution for explainable LLM-based text classification.
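
The abstract describes the core mechanism: per-token contextual embeddings from the LLM are fed to a "Siamese network"-style scorer that is trained once to align with LIME/SHAP importance scores, after which explanations for new sentences require only a forward pass and no perturbations. The sketch below illustrates this idea in PyTorch; the class name PlexScorer, the MLP architecture, the embedding dimension, and the MSE alignment objective are illustrative assumptions, not the authors' reference implementation (the paper defines the actual architecture and training details).

```python
# Hypothetical sketch of the PLEX idea; names, dimensions, and the loss
# are assumptions for illustration, not the paper's reference code.
import torch
import torch.nn as nn

class PlexScorer(nn.Module):
    """Siamese-style scorer: the same small MLP is applied to every token's
    contextual embedding, producing one importance score per token."""
    def __init__(self, embed_dim: int = 768, hidden_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, embed_dim) from a frozen LLM encoder
        # returns: (batch, seq_len) importance scores, no perturbations needed
        return self.mlp(token_embeddings).squeeze(-1)

scorer = PlexScorer()
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(token_embeddings, target_importance, mask):
    """One-off alignment step: regress the scorer's outputs against
    precomputed LIME/SHAP word scores (mask is 1 for real tokens, 0 for padding)."""
    optimizer.zero_grad()
    pred = scorer(token_embeddings)
    loss = loss_fn(pred * mask, target_importance * mask)
    loss.backward()
    optimizer.step()
    return loss.item()

# At explanation time, a single LLM forward pass plus the scorer yields
# per-token importances for any new sentence.
```

Under this reading, the expensive perturbation loop of LIME/SHAP is paid only once, to build the training targets; afterwards the scorer amortizes that cost across all future explanations, which is consistent with the reported two-to-four orders-of-magnitude savings.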

Yogachandran Rahulamathavan, Misbah Farooq, Varuna De Silva

Subject: Computing Technology, Computer Technology

Yogachandran Rahulamathavan, Misbah Farooq, Varuna De Silva. PLEX: Perturbation-free Local Explanations for LLM-Based Text Classification [EB/OL]. (2025-07-12) [2025-07-25]. https://arxiv.org/abs/2507.10596.
