National Preprint Platform

Causality-Driven Audits of Model Robustness

Source: arXiv
English Abstract

Robustness audits of deep neural networks (DNNs) provide a means to uncover model sensitivities to the challenging real-world imaging conditions that significantly degrade DNN performance in the wild. Such conditions often result from multiple interacting factors inherent to the environment, sensor, or processing pipeline, and may lead to complex image distortions that are not easily categorized. When robustness audits are limited to a set of isolated imaging effects or distortions, the results cannot be easily transferred to real-world conditions where image corruptions may be more complex or nuanced. To address this challenge, we present an alternative robustness-auditing method that uses causal inference to measure DNN sensitivities to the factors of the imaging process that cause complex distortions. Our approach uses causal models to explicitly encode assumptions about the domain-relevant factors and their interactions. Then, through extensive experiments on natural and rendered images across multiple vision tasks, we show that our approach reliably estimates the causal effect of each factor on DNN performance using only observational domain data. These causal effects directly tie DNN sensitivities to observable properties of the imaging pipeline in the domain of interest, reducing the risk of unexpected DNN failures when the model is deployed in that domain.
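The abstract does not give implementation details, but the core idea — estimating the causal effect of an imaging factor on DNN performance from observational data, with a causal model encoding domain assumptions — can be illustrated with a minimal backdoor-adjustment sketch. Everything below (the factor names, effect sizes, and the simple stratified estimator) is a hypothetical stand-in, not the paper's actual method or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical imaging domain: a confounder (scene type) influences both an
# imaging factor (motion blur) and DNN accuracy, so the naive comparison of
# blurred vs. sharp images is biased.
scene = rng.binomial(1, 0.5, n)               # 0 = indoor, 1 = outdoor
blur = rng.binomial(1, 0.2 + 0.5 * scene, n)  # outdoor scenes blur more often
# Ground-truth structural model: blur causes a -0.30 accuracy drop,
# outdoor scenes are independently +0.10 easier.
acc = 0.8 - 0.30 * blur + 0.10 * scene + rng.normal(0, 0.05, n)

# Naive (confounded) estimate: difference of conditional means.
naive = acc[blur == 1].mean() - acc[blur == 0].mean()

# Backdoor adjustment over the confounder: average the within-stratum
# blur effect, weighted by each stratum's marginal probability.
ate = sum(
    (acc[(blur == 1) & (scene == s)].mean()
     - acc[(blur == 0) & (scene == s)].mean()) * (scene == s).mean()
    for s in (0, 1)
)
print(f"naive: {naive:.3f}, adjusted ATE: {ate:.3f}")  # adjusted ATE ≈ -0.30
```

The adjusted estimate recovers the true -0.30 effect of blur from purely observational samples, while the naive contrast is biased toward zero because outdoor scenes are both blurrier and easier; the causal graph is what tells us which variables to adjust for.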

Nathan Drenkow, William Paul, Chris Ribaudo, Mathias Unberath

Subject categories: Computing Technology; Computer Technology

Nathan Drenkow, William Paul, Chris Ribaudo, Mathias Unberath. Causality-Driven Audits of Model Robustness [EB/OL]. (2025-08-05) [2025-08-16]. https://arxiv.org/abs/2410.23494
