
A Two-armed Bandit Framework for A/B Testing

Source: arXiv

Abstract

A/B testing is widely used in modern technology companies for policy evaluation and product deployment, with the goal of comparing the outcomes under a newly-developed policy against a standard control. Various causal inference and reinforcement learning methods developed in the literature are applicable to A/B testing. This paper introduces a two-armed bandit framework designed to improve the power of existing approaches. The proposed procedure consists of three main steps: (i) employing doubly robust estimation to generate pseudo-outcomes, (ii) utilizing a two-armed bandit framework to construct the test statistic, and (iii) applying a permutation-based method to compute the $p$-value. We demonstrate the efficacy of the proposed method through asymptotic theories, numerical experiments and real-world data from a ridesharing company, showing its superior performance in comparison to existing methods.
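As a rough illustration of the three-step procedure outlined above, the minimal Python sketch below generates doubly robust pseudo-outcomes, summarizes them with a placeholder test statistic, and computes a permutation-based p-value. The simulated data, the fitted nuisance models, and the simple mean-based statistic are illustrative assumptions only; the paper's actual bandit-based test statistic and asymptotic theory are not reproduced here.

```python
# Hypothetical sketch of the three-step pipeline described in the abstract.
# The mean-based statistic below is a placeholder, not the paper's
# two-armed bandit statistic.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

# Simulated A/B data: covariates X, binary arm A, outcome Y.
n = 2000
X = rng.normal(size=(n, 3))
A = rng.binomial(1, 0.5, size=n)
Y = X @ np.array([1.0, -0.5, 0.2]) + 0.3 * A + rng.normal(size=n)

def dr_pseudo_outcomes(X, A, Y):
    """Step (i): doubly robust pseudo-outcomes for the treatment effect."""
    e = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]      # propensity score
    mu1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)  # E[Y | X, A=1]
    mu0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)  # E[Y | X, A=0]
    return mu1 - mu0 + A * (Y - mu1) / e - (1 - A) * (Y - mu0) / (1 - e)

def test_statistic(pseudo):
    """Step (ii): placeholder summary statistic (mean pseudo-outcome)."""
    return np.mean(pseudo)

def permutation_p_value(X, A, Y, n_perm=500):
    """Step (iii): permutation p-value obtained by shuffling arm labels."""
    observed = test_statistic(dr_pseudo_outcomes(X, A, Y))
    null_stats = np.empty(n_perm)
    for b in range(n_perm):
        A_perm = rng.permutation(A)
        null_stats[b] = test_statistic(dr_pseudo_outcomes(X, A_perm, Y))
    return np.mean(np.abs(null_stats) >= np.abs(observed))

print("permutation p-value:", permutation_p_value(X, A, Y))
```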

Jinjuan Wang, Qianglin Wen, Yu Zhang, Xiaodong Yan, Chengchun Shi

Subject: Computing and Computer Technology

Jinjuan Wang, Qianglin Wen, Yu Zhang, Xiaodong Yan, Chengchun Shi. A Two-armed Bandit Framework for A/B Testing [EB/OL]. (2025-07-24) [2025-08-18]. https://arxiv.org/abs/2507.18118.