
Explainable AI in Spatial Analysis

Source: arXiv
Abstract

This chapter discusses the opportunities of eXplainable Artificial Intelligence (XAI) within the realm of spatial analysis. A key objective in spatial analysis is to model spatial relationships and infer spatial processes to generate knowledge from spatial data, which has largely been done with spatial statistical methods. More recently, machine learning has offered scalable and flexible approaches that complement traditional methods and has been increasingly applied in spatial data science. Despite its advantages, machine learning is often criticized for being a black box, which limits our understanding of model behavior and output. Recognizing this limitation, XAI has emerged as a pivotal field in AI that provides methods to explain the output of machine learning models, enhancing transparency and understanding. These methods are crucial for model diagnosis, bias detection, and ensuring the reliability of results obtained from machine learning models. This chapter introduces key concepts and methods in XAI with a focus on Shapley value-based approaches, arguably the most popular family of XAI methods, and their integration with spatial analysis. An empirical example of county-level voting behaviors in the 2020 Presidential election is presented to demonstrate the use of Shapley values together with spatial analysis, with a comparison to multi-scale geographically weighted regression. The chapter concludes with a discussion of the challenges and limitations of current XAI techniques and proposes new directions.
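As a rough illustration of the kind of workflow the abstract describes (not code from the chapter itself), the sketch below computes Shapley values for a tree-based model fitted to synthetic county-level covariates using the shap Python package; the feature names, data, and model choice are placeholder assumptions, and the per-county contributions could subsequently be mapped or analyzed spatially.

import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical county-level covariates and an outcome (e.g., a vote share).
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "median_income": rng.normal(55_000, 10_000, 500),
    "pct_bachelor": rng.uniform(10, 60, 500),
    "pop_density": rng.lognormal(5, 1, 500),
})
y = 0.4 + 1e-6 * X["median_income"] - 0.004 * X["pct_bachelor"] + rng.normal(0, 0.05, 500)

model = GradientBoostingRegressor().fit(X, y)

# Shapley values decompose each prediction into additive feature contributions.
explainer = shap.Explainer(model, X)
shap_values = explainer(X)

print(shap_values.values.shape)  # (n_counties, n_features)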

Ziqi Li

Subject areas: Computing and Computer Technology; Automation Technology and Equipment; Remote Sensing Technology

Ziqi Li. Explainable AI in Spatial Analysis [EB/OL]. (2025-05-01) [2025-05-19]. https://arxiv.org/abs/2505.00591.
