Improved Regret and Contextual Linear Extension for Pandora's Box and Prophet Inequality

Source: arXiv
Abstract

We study the Pandora's Box problem in an online learning setting with semi-bandit feedback. In each round, the learner sequentially pays to open up to $n$ boxes with unknown reward distributions, observes rewards upon opening, and decides when to stop. The utility of the learner is the maximum observed reward minus the cumulative cost of opened boxes, and the goal is to minimize regret defined as the gap between the cumulative expected utility and that of the optimal policy. We propose a new algorithm that achieves $\widetilde{O}(\sqrt{nT})$ regret after $T$ rounds, which improves the $\widetilde{O}(n\sqrt{T})$ bound of Agarwal et al. [2024] and matches the known lower bound up to logarithmic factors. To better capture real-life applications, we then extend our results to a natural but challenging contextual linear setting, where each box's expected reward is linear in some known but time-varying $d$-dimensional context and the noise distribution is fixed over time. We design an algorithm that learns both the linear function and the noise distributions, achieving $\widetilde{O}(nd\sqrt{T})$ regret. Finally, we show that our techniques also apply to the online Prophet Inequality problem, where the learner must decide immediately whether or not to accept a revealed reward. In both non-contextual and contextual settings, our approach achieves similar improvements and regret bounds.
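For intuition about the offline benchmark the regret is measured against, the sketch below simulates one round of Pandora's Box under the classical full-information baseline (Weitzman's index rule): each box's reservation value r_i solves E[(X_i - r_i)^+] = c_i, boxes are opened in decreasing order of reservation value, and search stops once the best observed reward exceeds the highest remaining index. This is only a sketch of the known-distribution optimal policy, not the paper's learning algorithm (which must estimate the unknown distributions from semi-bandit feedback); the Gaussian reward model, costs, and all numerical values here are hypothetical.

```python
import math
import random

def reservation_value(mu, sigma, cost, lo=-100.0, hi=100.0, iters=100):
    """Weitzman's reservation value r solves E[(X - r)^+] = cost
    for X ~ N(mu, sigma^2); found here by bisection (hypothetical Gaussian model)."""
    def expected_excess(r):
        z = (mu - r) / sigma
        phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # standard normal pdf
        Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))            # standard normal cdf
        return (mu - r) * Phi + sigma * phi                      # E[(X - r)^+]
    for _ in range(iters):
        mid = (lo + hi) / 2
        if expected_excess(mid) > cost:   # excess too large -> reservation value is higher
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def weitzman_round(boxes):
    """One round of Pandora's Box with known Gaussian rewards.
    boxes: list of (mu, sigma, cost). Returns utility = best observed reward - total cost."""
    indexed = sorted(
        ((reservation_value(mu, s, c), mu, s, c) for mu, s, c in boxes),
        reverse=True,                     # open boxes in decreasing reservation value
    )
    best, total_cost = 0.0, 0.0
    for r, mu, s, c in indexed:
        if best >= r:                     # stop once the best observed reward beats
            break                         # the highest remaining reservation value
        total_cost += c
        best = max(best, random.gauss(mu, s))
    return best - total_cost

# Hypothetical instance with n = 3 boxes: (mean, std, opening cost).
print(weitzman_round([(1.0, 0.5, 0.1), (0.8, 1.0, 0.2), (0.5, 2.0, 0.3)]))
```

In the online setting studied in the paper, the means, variances, and noise distributions above would be unknown; the learner only observes the rewards of the boxes it actually opens, which is exactly the semi-bandit feedback that drives the regret bounds stated in the abstract.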

Junyan Liu, Ziyun Chen, Kun Wang, Haipeng Luo, Lillian J. Ratliff

Subject: Computing Technology, Computer Technology

Junyan Liu, Ziyun Chen, Kun Wang, Haipeng Luo, Lillian J. Ratliff. Improved Regret and Contextual Linear Extension for Pandora's Box and Prophet Inequality [EB/OL]. (2025-05-24) [2025-06-30]. https://arxiv.org/abs/2505.18828
