Do ATCOs Need Explanations, and Why? Towards ATCO-Centered Explainable AI for Conflict Resolution Advisories
Interest in explainable artificial intelligence (XAI) is surging. Prior research has primarily focused on systems' ability to generate explanations, often guided by researchers' intuitions rather than end-users' needs. Unfortunately, such approaches have not yielded favorable outcomes when compared to a black-box baseline (i.e., no explanation). To address this gap, this paper advocates a human-centered approach that shifts focus to air traffic controllers (ATCOs) by asking a fundamental yet overlooked question: Do ATCOs need explanations, and if so, why? Insights from air traffic management (ATM), human-computer interaction, and the social sciences were synthesized to provide a holistic understanding of XAI challenges and opportunities in ATM. Evaluating 11 ATM operational goals revealed a clear need for explanations when ATCOs aim to document decisions and rationales for future reference or report generation. Conversely, ATCOs are less likely to seek them when their conflict resolution approach aligns with the artificial intelligence (AI) advisory. While this is a preliminary study, the findings are expected to inspire broader and deeper inquiries into the design of ATCO-centric XAI systems, paving the way for more effective human-AI interaction in ATM.
Katherine Fennedy, Brian Hilburn, Thaivalappil N. M. Nadirsha, Sameer Alam, Khanh-Duy Le, Hua Li
Subjects: Integrated transportation automation technology; automation equipment
Katherine Fennedy, Brian Hilburn, Thaivalappil N. M. Nadirsha, Sameer Alam, Khanh-Duy Le, Hua Li. Do ATCOs Need Explanations, and Why? Towards ATCO-Centered Explainable AI for Conflict Resolution Advisories [EB/OL]. (2025-05-05) [2025-05-25]. https://arxiv.org/abs/2505.03117.