
Finding Uncommon Ground: A Human-Centered Model for Extrospective Explanations

Source: arXiv

Abstract

The need for explanations in AI has, by and large, been driven by the desire to increase the transparency of black-box machine learning models. However, such explanations, which focus on the internal mechanisms that lead to a specific output, are often unsuitable for non-experts. To facilitate a human-centered perspective on AI explanations, agents need to focus on individuals and their preferences as well as the context in which the explanations are given. This paper proposes a personalized approach to explanation, where the agent tailors the information provided to the user based on what is most likely pertinent to them. We propose a model of the agent's worldview that also serves as a personal and dynamic memory of its previous interactions with the same user, based on which the artificial agent can estimate what part of its knowledge is most likely new information to the user.
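The abstract's core idea, a per-user memory from which the agent estimates which parts of its knowledge are likely new to a given user, can be illustrated with a minimal sketch. This is not the authors' implementation; the class and method names below are hypothetical, and the "facts" are simplified to plain strings.

```python
# Minimal sketch (hypothetical, not the paper's model): a per-user memory
# that lets an agent select the facts most likely to be new to the user.

class ExplanationMemory:
    """Tracks which facts have already been shared with each user."""

    def __init__(self):
        self._seen = {}  # user id -> set of facts already mentioned

    def novel_facts(self, user, knowledge):
        """Return the agent's facts that this user has not yet seen."""
        seen = self._seen.setdefault(user, set())
        return [fact for fact in knowledge if fact not in seen]

    def record(self, user, facts):
        """Remember that these facts were included in an explanation."""
        self._seen.setdefault(user, set()).update(facts)


memory = ExplanationMemory()
knowledge = ["the oven is preheating", "preheating takes 10 minutes"]

first = memory.novel_facts("alice", knowledge)   # both facts are new
memory.record("alice", first)
second = memory.novel_facts("alice", knowledge)  # nothing left to explain
```

In the paper's terms, the memory stands in for the agent's dynamic model of its previous interactions with the same user; a fuller version would also weight facts by relevance to the current context rather than filtering on exact matches.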

Laura Spillner, Nima Zargham, Mihai Pomarlan, Robert Porzel, Rainer Malaka

Subject: Computing Technology, Computer Technology

Laura Spillner, Nima Zargham, Mihai Pomarlan, Robert Porzel, Rainer Malaka. Finding Uncommon Ground: A Human-Centered Model for Extrospective Explanations [EB/OL]. (2025-07-29) [2025-08-11]. https://arxiv.org/abs/2507.21571.