
Stochastic Multi-Objective Multi-Armed Bandits: Regret Definition and Algorithm

Source: arXiv
Abstract

Multi-armed bandit (MAB) problems are widely applied to online optimization tasks that require balancing exploration and exploitation. In practical scenarios, these tasks often involve multiple conflicting objectives, giving rise to multi-objective multi-armed bandits (MO-MAB). Existing MO-MAB approaches predominantly rely on the Pareto regret metric introduced in \cite{drugan2013designing}. However, this metric has notable limitations, particularly in accounting for all Pareto-optimal arms simultaneously. To address these challenges, we propose a novel and comprehensive regret metric that ensures balanced performance across conflicting objectives. Additionally, we introduce the concept of \textit{Efficient Pareto-Optimal} arms, which are specifically designed for online optimization. Based on our new metric, we develop a two-phase MO-MAB algorithm that achieves sublinear regret for both Pareto-optimal and efficient Pareto-optimal arms.
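To make the Pareto-optimality notion underlying the regret discussion concrete, the following is a minimal sketch (not the paper's algorithm) of how Pareto dominance between arms' mean-reward vectors determines the Pareto-optimal arm set; the function names and the example reward vectors are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dominates(u, v):
    """True if reward vector u Pareto-dominates v: u is at least as good
    in every objective and strictly better in at least one."""
    u, v = np.asarray(u), np.asarray(v)
    return bool(np.all(u >= v) and np.any(u > v))

def pareto_optimal_arms(means):
    """Indices of arms whose mean-reward vectors are dominated by no other arm."""
    return [i for i, m in enumerate(means)
            if not any(dominates(other, m)
                       for j, other in enumerate(means) if j != i)]

# Illustrative example: three arms, two conflicting objectives.
# Arm 2 is dominated by arm 0; arms 0 and 1 trade off the two objectives.
means = [[0.8, 0.3], [0.4, 0.9], [0.6, 0.2]]
print(pareto_optimal_arms(means))  # → [0, 1]
```

In the true bandit setting the mean vectors are unknown and must be estimated from samples, which is exactly where the exploration–exploitation trade-off and the choice of regret metric come into play.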

Mansoor Davoodi, Setareh Maghsudi

Subject: Computing Technology, Computer Technology

Mansoor Davoodi, Setareh Maghsudi. Stochastic Multi-Objective Multi-Armed Bandits: Regret Definition and Algorithm [EB/OL]. (2025-06-16) [2025-06-29]. https://arxiv.org/abs/2506.13125.
