
Component Based Quantum Machine Learning Explainability

Source: arXiv
Abstract

Explainable ML algorithms are designed to provide transparency and insight into their decision-making process. Explaining how ML models arrive at their predictions is critical in fields such as healthcare and finance, as it helps detect bias in predictions and supports GDPR compliance in these domains. QML leverages quantum phenomena such as entanglement and superposition, offering the potential for computational speedup and greater insight compared to classical ML. However, QML models also inherit the black-box nature of their classical counterparts, requiring explainability techniques to be developed for and applied to these QML models to help understand why and how a particular output was generated. This paper explores the idea of a modular, explainable QML framework that splits QML algorithms into their core components, such as feature maps, variational circuits (ansatz), optimizers, kernels, and quantum-classical loops. Each component is analyzed using explainability techniques, such as ALE and SHAP, adapted to the different components of these QML algorithms. By combining insights from these parts, the paper aims to infer explainability for the overall QML model.
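As a rough illustration of the component-based idea described in the abstract (not the authors' implementation), the sketch below applies SHAP to the quantum-kernel component of a simple QML pipeline: a feature map encodes the data, a quantum kernel produces the Gram matrix, and a classical SVM makes the prediction, with SHAP attributing the decision back to the input features. Library choices (qiskit-machine-learning, scikit-learn, shap) and all names are illustrative assumptions.

```python
# Minimal sketch, assuming qiskit-machine-learning, scikit-learn, and shap
# are installed. Not the paper's implementation; illustrative only.
import shap
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel

# Toy two-feature dataset so the feature map needs only two qubits.
X, y = make_classification(n_samples=40, n_features=2, n_redundant=0,
                           random_state=0)

# Component 1: the feature map (data-encoding circuit).
feature_map = ZZFeatureMap(feature_dimension=2, reps=2)

# Component 2: the quantum kernel built from that feature map.
qkernel = FidelityQuantumKernel(feature_map=feature_map)

# Classical head: an SVM trained on the precomputed quantum Gram matrix.
gram_train = qkernel.evaluate(x_vec=X)
svc = SVC(kernel="precomputed").fit(gram_train, y)

def predict_fn(samples):
    # Map raw features -> quantum kernel row -> SVM decision value.
    gram = qkernel.evaluate(x_vec=samples, y_vec=X)
    return svc.decision_function(gram)

# SHAP treats the encode -> kernel -> SVM path as one black-box function,
# attributing its output to the classical input features.
explainer = shap.KernelExplainer(predict_fn, X[:10])
shap_values = explainer.shap_values(X[:5])
print(shap_values)
```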

Krishnendu Guha, Barra White

Computing Technology; Computer Technology

Krishnendu Guha, Barra White. Component Based Quantum Machine Learning Explainability [EB/OL]. (2025-06-14) [2025-07-19]. https://arxiv.org/abs/2506.12378.
