Policy Learning with $\alpha$-Expected Welfare

Source: arXiv
Abstract

This paper proposes an optimal policy that targets the average welfare of the worst-off $\alpha$-fraction of the post-treatment outcome distribution. We refer to this policy as the $\alpha$-Expected Welfare Maximization ($\alpha$-EWM) rule, where $\alpha \in (0,1]$ denotes the size of the subpopulation of interest. The $\alpha$-EWM rule interpolates between expected welfare ($\alpha=1$) and Rawlsian welfare ($\alpha\rightarrow 0$). For $\alpha\in (0,1)$, an $\alpha$-EWM rule can be interpreted as a distributionally robust EWM rule that allows the target population to have a different distribution from the study population. Using the dual formulation of our $\alpha$-expected welfare function, we propose a debiased estimator for the optimal policy and establish asymptotic upper bounds on its regret. In addition, we develop asymptotically valid inference for the optimal welfare based on the proposed debiased estimator. We examine the finite-sample performance of the debiased estimator and the inference procedure on both real and synthetic data.
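The abstract does not reproduce the welfare criterion itself. As a hedged sketch based on the standard Rockafellar–Uryasev representation of the lower-tail average (the notation $Y(d)$ for the outcome under policy $d$ is ours, not taken from the paper), the $\alpha$-expected welfare and a dual formulation of the kind referred to above typically take the form

$$
W_\alpha(d) \;=\; \frac{1}{\alpha}\int_0^{\alpha} F_{Y(d)}^{-1}(u)\,du
\;=\; \max_{t\in\mathbb{R}}\Big\{\, t \;-\; \frac{1}{\alpha}\,\mathbb{E}\big[(t - Y(d))_{+}\big] \Big\},
\qquad \alpha\in(0,1],
$$

so that $W_1(d)=\mathbb{E}[Y(d)]$ recovers expected welfare, $W_\alpha(d)\to \operatorname{ess\,inf} Y(d)$ as $\alpha\to 0$ recovers the Rawlsian criterion, and the maximizer $t$ is an $\alpha$-quantile of $Y(d)$.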

Yanqin Fan, Yuan Qi, Gaoqian Xu

Economic Planning, Economic Management

Yanqin Fan, Yuan Qi, Gaoqian Xu. Policy Learning with $\alpha$-Expected Welfare [EB/OL]. (2025-04-30) [2025-06-23]. https://arxiv.org/abs/2505.00256.
