
Reinforcement Learning with Continuous Actions Under Unmeasured Confounding

Source: arXiv

Abstract

This paper addresses the challenge of offline policy learning in reinforcement learning with continuous action spaces when unmeasured confounders are present. While most existing research focuses on policy evaluation within partially observable Markov decision processes (POMDPs) and assumes discrete action spaces, we advance this field by establishing a novel identification result to enable the nonparametric estimation of policy value for a given target policy under an infinite-horizon framework. Leveraging this identification, we develop a minimax estimator and introduce a policy-gradient-based algorithm to identify the in-class optimal policy that maximizes the estimated policy value. Furthermore, we provide theoretical results regarding the consistency, finite-sample error bound, and regret bound of the resulting optimal policy. Extensive simulations and a real-world application using the German Family Panel data demonstrate the effectiveness of our proposed methodology.
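To make the policy-learning step concrete, below is a minimal sketch of gradient ascent on an estimated policy value. Everything here is a hypothetical stand-in: the linear-Gaussian policy class, the synthetic data, and estimate_policy_value (a toy importance-style surrogate, not the paper's minimax estimator built from the identification result); a finite-difference gradient replaces the analytic policy gradient.

```python
import numpy as np

def estimate_policy_value(theta, data):
    """Toy surrogate for a policy-value estimator (NOT the paper's minimax
    estimator): reweights observed rewards by a Gaussian policy density
    centered at theta @ state."""
    states, actions, rewards = data
    means = states @ theta
    weights = np.exp(-0.5 * (actions - means) ** 2)
    return np.mean(weights * rewards)

def numerical_gradient(f, theta, eps=1e-5):
    """Central finite-difference gradient, standing in for an analytic
    policy gradient."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        grad[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return grad

# Synthetic offline data: continuous states, continuous actions, toy rewards.
rng = np.random.default_rng(0)
states = rng.normal(size=(500, 3))
actions = rng.normal(size=500)
rewards = -np.abs(actions - states[:, 0])
data = (states, actions, rewards)

# Policy-gradient-style loop: ascend the estimated value in policy parameters.
theta = np.zeros(3)
for _ in range(200):
    grad = numerical_gradient(lambda t: estimate_policy_value(t, data), theta)
    theta += 0.1 * grad

print("learned policy parameters:", theta)
```

In the paper's setting, the inner value estimate would instead come from the minimax estimator enabled by the identification result, with the gradient taken over the parameters of the in-class target policy.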

Yuhan Li, Eugene Han, Yifan Hu, Wenzhuo Zhou, Zhengling Qi, Yifan Cui, Ruoqing Zhu

Subjects: Fundamental Theory of Automation; Computing and Computer Technology

Yuhan Li, Eugene Han, Yifan Hu, Wenzhuo Zhou, Zhengling Qi, Yifan Cui, Ruoqing Zhu. Reinforcement Learning with Continuous Actions Under Unmeasured Confounding [EB/OL]. (2025-05-01) [2025-06-04]. https://arxiv.org/abs/2505.00304.
