National Preprint Platform (国家预印本平台)

Beyond Explainability: The Case for AI Validation


Source: arXiv
Abstract

Artificial intelligence (AI) systems are transforming decision-making across critical domains such as healthcare, finance, and criminal justice. However, their growing opacity presents governance challenges that current regulatory approaches, focused predominantly on explainability, fail to address adequately. This article argues for a shift toward validation as a central regulatory pillar. Validation, ensuring the reliability, consistency, and robustness of AI outputs, offers a more practical, scalable, and risk-sensitive alternative to explainability, particularly in high-stakes contexts where interpretability may be technically or economically unfeasible. We introduce a typology based on two axes, validity and explainability, classifying AI systems into four categories and exposing the trade-offs between interpretability and output reliability. Drawing on a comparative analysis of regulatory approaches in the EU, US, UK, and China, we show how validation can enhance societal trust, fairness, and safety even where explainability is limited. We propose a forward-looking policy framework centered on pre- and post-deployment validation, third-party auditing, harmonized standards, and liability incentives. This framework balances innovation with accountability and provides a governance roadmap for responsibly integrating opaque, high-performing AI systems into society.

Dalit Ken-Dror Feldman, Daniel Benoliel

Subject: Law

Dalit Ken-Dror Feldman, Daniel Benoliel. Beyond Explainability: The Case for AI Validation [EB/OL]. (2025-05-27) [2025-06-22]. https://arxiv.org/abs/2505.21570.
