
Multi-Armed Bandits With Machine Learning-Generated Surrogate Rewards

Source: arXiv
Abstract

Multi-armed bandit (MAB) is a widely adopted framework for sequential decision-making under uncertainty. Traditional bandit algorithms rely solely on online data, which tends to be scarce because it can only be gathered during the online phase, when the arms are actively pulled. However, in many practical settings, rich auxiliary data, such as covariates of past users, is available before any arm is deployed. We introduce a new setting for MAB in which pre-trained machine learning (ML) models convert side information and historical data into surrogate rewards. A prominent feature of this setting is that the surrogate rewards may exhibit substantial bias, since true reward data is typically unavailable in the offline phase, forcing the ML predictions to rely heavily on extrapolation. To address this issue, we propose the Machine Learning-Assisted Upper Confidence Bound (MLA-UCB) algorithm, which can be applied with any reward prediction model and any form of auxiliary data. When the predicted and true rewards are jointly Gaussian, it provably improves the cumulative regret whenever their correlation is non-zero, even in cases where the mean surrogate reward completely misaligns with the true mean reward. Notably, our method requires no prior knowledge of the covariance matrix between the true and surrogate rewards. We compare MLA-UCB with the standard UCB in a range of numerical studies and show a sizable efficiency gain even when the size of the offline data and the correlation between predicted and true rewards are moderate.
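The abstract does not spell out the estimator, so the sketch below is only an illustration of the general idea it describes: a UCB variant that debiases the offline surrogate mean with a control-variate (regression) adjustment estimated from online (true, surrogate) pairs. Everything in the simulation, including the Gaussian reward model, the `bias` and `rho` parameters, the warm-up schedule, and the confidence width, is a hypothetical stand-in, not the paper's exact MLA-UCB procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical setup: none of these numbers come from the paper. ---
K = 3                                  # number of arms
true_means = np.array([0.2, 0.5, 0.8])
rho = 0.7                              # corr(true reward, surrogate reward)
bias = 1.5                             # surrogate means badly misaligned
horizon = 5000
n_offline = 2000                       # offline surrogate samples per arm

def draw(arm, size=1):
    """Draw jointly Gaussian (true, surrogate) reward pairs with
    correlation rho; the surrogate mean is shifted by `bias` to mimic
    ML predictions that extrapolate poorly."""
    z = rng.standard_normal(size)
    eps = rng.standard_normal(size)
    y = true_means[arm] + z
    s = true_means[arm] + bias + rho * z + np.sqrt(1 - rho**2) * eps
    return y, s

# Offline phase: only surrogate rewards are observable.
offline_s_mean = np.array([draw(a, n_offline)[1].mean() for a in range(K)])

# Online phase: UCB on a control-variate-adjusted mean estimate.
ys = [[] for _ in range(K)]            # true rewards observed per arm
ss = [[] for _ in range(K)]            # surrogates observed per arm
regret = 0.0

for t in range(1, horizon + 1):
    if t <= 2 * K:                     # pull each arm twice to initialize
        arm = (t - 1) % K
    else:
        ucb = np.empty(K)
        for a in range(K):
            y, s = np.array(ys[a]), np.array(ss[a])
            n = len(y)
            # Control-variate coefficient, estimated from online pairs;
            # no prior knowledge of the covariance is assumed.
            C = np.cov(y, s)
            beta = C[0, 1] / max(C[1, 1], 1e-12)
            # The offline surrogate mean enters only through a
            # difference, so a constant surrogate bias cancels.
            mu = y.mean() + beta * (offline_s_mean[a] - s.mean())
            resid_var = max(np.var(y - beta * s), 1e-12)
            ucb[a] = mu + np.sqrt(2 * resid_var * np.log(t) / n)
        arm = int(np.argmax(ucb))
    y, s = draw(arm)
    ys[arm].append(y[0])
    ss[arm].append(s[0])
    regret += true_means.max() - true_means[arm]

print(f"cumulative regret after {horizon} rounds: {regret:.1f}")
```

The design point mirrors the abstract's claims: the offline surrogate mean enters the estimate only through a difference against the online surrogate mean, so a constant prediction bias cancels, while a non-zero correlation shrinks the residual variance that drives the confidence width, and the covariance between true and surrogate rewards is estimated on the fly rather than assumed known.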

Wenlong Ji, Yihan Pan, Ruihao Zhu, Lihua Lei

Computing technology; computer technology

Wenlong Ji, Yihan Pan, Ruihao Zhu, Lihua Lei. Multi-Armed Bandits With Machine Learning-Generated Surrogate Rewards [EB/OL]. (2025-06-20) [2025-07-01]. https://arxiv.org/abs/2506.16658.
