
Improved Offline Contextual Bandits with Second-Order Bounds: Betting and Freezing

Source: arXiv
Abstract

We consider off-policy selection and learning in contextual bandits, where the learner aims to select or train a reward-maximizing policy using data collected by a fixed behavior policy. Our contribution is two-fold. First, we propose a novel off-policy selection method that leverages a new betting-based confidence bound applied to an inverse propensity weight sequence. Our theoretical analysis reveals that this method achieves a significantly improved, variance-adaptive guarantee over prior work. Second, we propose a novel and generic condition on the optimization objective for off-policy learning that strikes a different balance between bias and variance. One special case, which we call freezing, tends to induce low variance, which is preferred in small-data regimes. Our analysis shows that it matches the best existing guarantees. In our empirical study, our selection method outperforms existing methods, and freezing exhibits improved performance in small-sample regimes.
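
To make the setup concrete, below is a minimal sketch of the standard inverse-propensity-weighted (IPW) value estimate that underlies off-policy selection. This is not the paper's betting-based confidence bound; the function name and the toy data are illustrative assumptions.

```python
import numpy as np

def ipw_value_estimate(rewards, behavior_probs, target_probs):
    """Estimate the value of a target policy from logged bandit data
    via inverse propensity weighting (IPW).

    rewards[i]        : observed reward for the logged action in round i
    behavior_probs[i] : probability the behavior policy gave that action
    target_probs[i]   : probability the target policy gives the same action
    """
    weights = target_probs / behavior_probs  # inverse propensity weights
    return np.mean(weights * rewards)        # unbiased if behavior_probs > 0

# Toy usage: synthetic logged data, evaluated for a candidate target policy.
rng = np.random.default_rng(0)
n = 1000
behavior_probs = rng.uniform(0.2, 0.8, size=n)
target_probs = rng.uniform(0.2, 0.8, size=n)
rewards = rng.binomial(1, 0.5, size=n).astype(float)
print(ipw_value_estimate(rewards, behavior_probs, target_probs))
```

The variance of this estimator is driven by the weight sequence `target_probs / behavior_probs`, which is why variance-adaptive confidence bounds on that sequence, such as the betting-based bound proposed here, can tighten policy selection.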

J. Jon Ryu, Jeongyeol Kwon, Benjamin Koppe, Kwang-Sung Jun

Computing Technology; Computer Technology

J. Jon Ryu, Jeongyeol Kwon, Benjamin Koppe, Kwang-Sung Jun. Improved Offline Contextual Bandits with Second-Order Bounds: Betting and Freezing [EB/OL]. (2025-07-14) [2025-08-02]. https://arxiv.org/abs/2502.10826.
