国家预印本平台 (National Preprint Platform)

Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations

Source: arXiv

Abstract

Explanations for computer vision models are important tools for interpreting how the underlying models work. However, they are often presented in static formats, which pose challenges for users, including information overload, a gap between semantic and pixel-level information, and limited opportunities for exploration. We investigate interactivity as a mechanism for tackling these issues in three common explanation types: heatmap-based, concept-based, and prototype-based explanations. We conducted a study (N=24), using a bird identification task, involving participants with diverse technical and domain expertise. We found that while interactivity enhances user control, facilitates rapid convergence to relevant information, and allows users to expand their understanding of the model and explanation, it also introduces new challenges. To address these, we provide design recommendations for interactive computer vision explanations, including carefully selected default views, independent input controls, and constrained output spaces.

Indu Panigrahi, Sunnie S. Y. Kim, Amna Liaqat, Rohan Jinturkar, Olga Russakovsky, Ruth Fong, Parastoo Abtahi

Subject: Computing Technology; Computer Technology

Indu Panigrahi, Sunnie S. Y. Kim, Amna Liaqat, Rohan Jinturkar, Olga Russakovsky, Ruth Fong, Parastoo Abtahi. Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations [EB/OL]. (2025-04-14) [2025-05-28]. https://arxiv.org/abs/2504.10745.
