Mitigating Hidden Confounding by Progressive Confounder Imputation via Large Language Models
Hidden confounding remains a central challenge in estimating treatment effects from observational data, as unobserved variables can lead to biased causal estimates. While recent work has explored the use of large language models (LLMs) for causal inference, most approaches still rely on the unconfoundedness assumption. In this paper, we make the first attempt to mitigate hidden confounding using LLMs. We propose ProCI (Progressive Confounder Imputation), a framework that elicits the semantic and world knowledge of LLMs to iteratively generate, impute, and validate hidden confounders. ProCI leverages two key capabilities of LLMs: their strong semantic reasoning ability, which enables the discovery of plausible confounders from both structured and unstructured inputs, and their embedded world knowledge, which supports counterfactual reasoning under latent confounding. To improve robustness, ProCI adopts a distributional reasoning strategy instead of direct value imputation, preventing collapsed outputs. Extensive experiments demonstrate that ProCI uncovers meaningful confounders and significantly improves treatment effect estimation across various datasets and LLMs.
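The generate–impute–validate loop described in the abstract can be sketched as follows. This is a minimal illustration only: `propose_confounder` and `impute_distribution` are hypothetical stand-ins for LLM calls, and the validation rule is an assumption, not the paper's actual interface.

```python
def propose_confounder(known_confounders):
    # Hypothetical stand-in for an LLM call that names a plausible
    # hidden confounder given the confounders found so far.
    candidates = ["socioeconomic status", "baseline health", "region"]
    return candidates[len(known_confounders) % len(candidates)]

def impute_distribution(record, confounder_name):
    # Distributional reasoning: return a probability distribution over
    # plausible confounder values rather than a single point estimate,
    # which the abstract argues prevents collapsed outputs.
    # (Fixed probabilities here are placeholders for an LLM's output.)
    return {"low": 0.3, "medium": 0.5, "high": 0.2}

def proci(dataset, max_rounds=3):
    """Sketch of the Progressive Confounder Imputation loop."""
    confounders = []
    for _ in range(max_rounds):
        name = propose_confounder(confounders)
        if name in confounders:
            break  # validation step (assumed): stop when nothing new is proposed
        # Impute a value distribution for the new confounder per record.
        imputed = [impute_distribution(r, name) for r in dataset]
        confounders.append(name)
    return confounders
```

The imputed distributions would then augment the observed covariates in a downstream treatment effect estimator; that step is outside the scope of this sketch.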
Hao Yang, Haoxuan Li, Luyu Chen, Haoxiang Wang, Xu Chen, Mingming Gong
Computing Technology, Computer Technology
Hao Yang, Haoxuan Li, Luyu Chen, Haoxiang Wang, Xu Chen, Mingming Gong. Mitigating Hidden Confounding by Progressive Confounder Imputation via Large Language Models [EB/OL]. (2025-06-26) [2025-07-20]. https://arxiv.org/abs/2507.02928.