Doubly robust inference via calibration
Doubly robust estimators are widely used for estimating average treatment effects and other linear summaries of regression functions. While consistency requires only one of two nuisance functions to be estimated consistently, asymptotic normality typically requires sufficiently fast convergence of both. In this work, we correct this mismatch: we show that calibrating the nuisance estimators within a doubly robust procedure yields doubly robust asymptotic normality for linear functionals. We introduce a general framework, calibrated debiased machine learning (calibrated DML), and propose a specific estimator that augments standard DML with a simple isotonic regression adjustment. Our theoretical analysis shows that the calibrated DML estimator remains asymptotically normal if either the regression or the Riesz representer of the functional is estimated sufficiently well, allowing the other to converge arbitrarily slowly or even inconsistently. We further propose a simple bootstrap method for constructing confidence intervals, enabling doubly robust inference without additional nuisance estimation. In a range of semi-synthetic benchmark datasets, calibrated DML reduces bias and improves coverage relative to standard DML. Our method can be integrated into existing DML pipelines by adding just a few lines of code to calibrate cross-fitted estimates via isotonic regression.
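To make the abstract's final claim concrete, the sketch below illustrates one way such a calibration step might slot into a DML pipeline for the average treatment effect. This is a hypothetical illustration under stated assumptions, not the authors' released implementation: it isotonic-calibrates cross-fitted outcome regressions and propensity scores (the propensity here standing in for the Riesz representer) and plugs the calibrated nuisances into the standard AIPW estimator. All function and variable names are assumptions.

```python
# Hypothetical sketch of isotonic calibration of cross-fitted nuisances
# before forming a standard AIPW/DML point estimate of the ATE.
import numpy as np
from sklearn.isotonic import IsotonicRegression


def iso_calibrate(fit_preds, fit_targets, apply_preds):
    """Fit a monotone map from predictions to targets (isotonic
    regression) and apply it to a possibly larger set of predictions."""
    iso = IsotonicRegression(out_of_bounds="clip")
    iso.fit(fit_preds, fit_targets)
    return iso.predict(apply_preds)


def calibrated_aipw_ate(y, a, mu1, mu0, pi, eps=1e-3):
    """y: outcomes; a: binary treatment; mu1, mu0: cross-fitted outcome
    regressions E[Y | A=1, X] and E[Y | A=0, X]; pi: cross-fitted
    propensity scores. Returns the AIPW estimate with calibrated nuisances."""
    # Calibrate the propensity score against the observed treatment
    # indicator, then clip away from 0 and 1 for stable weighting.
    pi_c = np.clip(iso_calibrate(pi, a, pi), eps, 1 - eps)
    # Calibrate each outcome regression against observed outcomes in its
    # own treatment arm, then apply the fitted map to all units.
    mu1_c = iso_calibrate(mu1[a == 1], y[a == 1], mu1)
    mu0_c = iso_calibrate(mu0[a == 0], y[a == 0], mu0)
    # Standard doubly robust (AIPW) estimator with calibrated nuisances.
    psi = (mu1_c - mu0_c
           + a * (y - mu1_c) / pi_c
           - (1 - a) * (y - mu0_c) / (1 - pi_c))
    return psi.mean()
```

In this sketch, the only change relative to a plain DML pipeline is the three `iso_calibrate` calls; the estimating equation itself is untouched, which is consistent with the abstract's description of calibration as a lightweight add-on to cross-fitted estimates.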
Marco Carone, Lars van der Laan, Alex Luedtke
Computing Technology; Computer Technology
Marco Carone, Lars van der Laan, Alex Luedtke. Doubly robust inference via calibration [EB/OL]. (2025-06-27) [2025-07-19]. https://arxiv.org/abs/2411.02771.