Disentangling Doubt in Deep Causal AI

Source: arXiv
Abstract

Accurate individual treatment-effect estimation in high-stakes applications demands both reliable point predictions and interpretable uncertainty quantification. We propose a factorized Monte Carlo Dropout framework for deep twin-network models that splits total predictive variance into representation uncertainty (sigma_rep) in the shared encoder and prediction uncertainty (sigma_pred) in the outcome heads. Across three synthetic covariate-shift regimes, our intervals are well-calibrated (ECE < 0.03) and satisfy sigma_rep^2 + sigma_pred^2 ~ sigma_tot^2. Additionally, we observe a crossover: head uncertainty leads on in-distribution data, but representation uncertainty dominates under shift. Finally, on a real-world twins cohort with induced multivariate shifts, only sigma_rep spikes on out-of-distribution samples (delta sigma ~ 0.0002) and becomes the primary error predictor (rho_rep <= 0.89), while sigma_pred remains flat. This module-level decomposition offers a practical diagnostic for detecting and interpreting uncertainty sources in deep causal-effect models.
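The decomposition described above can be read as an instance of the law of total variance: holding an encoder dropout mask fixed and averaging over head masks gives a per-encoder-sample mean, whose variance across encoder samples is sigma_rep^2, while the mean of the within-sample variances is sigma_pred^2, so the two components sum exactly to the total sample variance. The sketch below illustrates this idea in PyTorch; the TwinNet architecture, layer widths, dropout rate, and sample counts (n_enc, n_head) are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of factorized MC Dropout for a twin network;
# architecture and hyperparameters are assumptions, not the authors' code.
import torch
import torch.nn as nn

class TwinNet(nn.Module):
    """Twin network: shared encoder plus separate outcome heads for T=0 / T=1."""
    def __init__(self, d_in, d_hid=64, p_drop=0.1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(d_in, d_hid), nn.ReLU(), nn.Dropout(p_drop))
        self.head0 = nn.Sequential(
            nn.Linear(d_hid, d_hid), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(d_hid, 1))
        self.head1 = nn.Sequential(
            nn.Linear(d_hid, d_hid), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(d_hid, 1))

@torch.no_grad()
def factorized_mc_dropout(model, x, n_enc=20, n_head=20):
    """Split the predictive variance of the estimated treatment effect
    (head1 - head0) into encoder and head components via the law of
    total variance: sigma_tot^2 = sigma_rep^2 + sigma_pred^2."""
    model.train()  # keep dropout active at inference time (MC Dropout)
    head_means, head_vars = [], []
    for _ in range(n_enc):
        z = model.encoder(x)  # one encoder dropout mask, held fixed below
        taus = torch.stack([model.head1(z) - model.head0(z)
                            for _ in range(n_head)])  # resample head masks
        head_means.append(taus.mean(0))
        head_vars.append(taus.var(0, unbiased=False))
    head_means = torch.stack(head_means)
    sigma_rep2 = head_means.var(0, unbiased=False)  # Var_z E[tau | z]
    sigma_pred2 = torch.stack(head_vars).mean(0)    # E_z Var[tau | z]
    return sigma_rep2, sigma_pred2, sigma_rep2 + sigma_pred2
```

With the dropout masks grouped this way, sigma_rep^2 + sigma_pred^2 equals the total (biased) sample variance over all n_enc x n_head draws by construction, which is one way the additive decomposition reported in the abstract could be realized.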

Cooper Doyle

Subject: Computing Technology, Computer Technology

Cooper Doyle. Disentangling Doubt in Deep Causal AI [EB/OL]. (2025-07-04) [2025-07-18]. https://arxiv.org/abs/2507.03622.
