
Regret Bounds for Robust Online Decision Making

Source: arXiv
Abstract

We propose a framework which generalizes "decision making with structured observations" by allowing robust (i.e. multivalued) models. In this framework, each model associates each decision with a convex set of probability distributions over outcomes. Nature can choose distributions out of this set in an arbitrary (adversarial) manner that can be nonoblivious and depend on past history. The resulting framework offers much greater generality than classical bandits and reinforcement learning, since the realizability assumption becomes much weaker and more realistic. We then derive a theory of regret bounds for this framework. Although our lower and upper bounds are not tight, they are sufficient to fully characterize power-law learnability. We demonstrate this theory in two special cases: robust linear bandits and tabular robust online reinforcement learning. In both cases, we derive regret bounds that improve on the state of the art (although we do not address computational efficiency).
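To make the interaction protocol concrete, below is a minimal sketch (not from the paper) of the robust online decision-making loop described in the abstract. It assumes finite decision and outcome spaces and represents each convex set of outcome distributions by a finite list of its vertices; names such as RobustModel and worst_case_reward, and the specific adversarial choice rule, are illustrative assumptions rather than the authors' construction.

```python
# Sketch of the protocol: a robust model maps each decision to a convex set of
# outcome distributions; nature picks a distribution from that set (possibly
# depending on history), and an outcome is sampled. Assumptions: finite spaces,
# sets given by their vertices. All names here are illustrative.
import random

class RobustModel:
    """Maps each decision to a convex set of outcome distributions (given by vertices)."""
    def __init__(self, vertex_sets):
        # vertex_sets[decision] = list of probability vectors over outcomes
        self.vertex_sets = vertex_sets

    def distribution_set(self, decision):
        return self.vertex_sets[decision]

def worst_case_reward(model, decision, reward):
    # Value of a decision under the adversarial (worst-case) choice within its set.
    return min(sum(p * reward[o] for o, p in enumerate(dist))
               for dist in model.distribution_set(decision))

def play(model, reward, horizon, policy, rng=random.Random(0)):
    """Run the loop: the learner picks a decision, nature picks a distribution
    from the model's set (here simply the vertex minimizing expected reward,
    i.e. a nonoblivious adversary could also use the history), and an outcome
    is drawn. Returns cumulative regret against the best decision's worst-case value."""
    best = max(worst_case_reward(model, d, reward) for d in range(len(model.vertex_sets)))
    regret = 0.0
    history = []
    for t in range(horizon):
        d = policy(history)
        dist = min(model.distribution_set(d),
                   key=lambda q: sum(p * reward[o] for o, p in enumerate(q)))
        outcome = rng.choices(range(len(dist)), weights=dist)[0]
        history.append((d, outcome))
        regret += best - sum(p * reward[o] for o, p in enumerate(dist))
    return regret

# Usage: two decisions, two outcomes, reward = indicator of outcome 1.
model = RobustModel({0: [[0.5, 0.5]],               # decision 0: a single distribution
                     1: [[0.2, 0.8], [0.6, 0.4]]})  # decision 1: a nontrivial convex set
print(play(model, reward=[0.0, 1.0], horizon=100, policy=lambda h: 1))
```

In this toy run the always-play-decision-1 policy suffers linear regret, since the adversary may always answer from the unfavorable end of decision 1's set; the paper's regret bounds concern learners that compete with the best worst-case decision under much weaker realizability assumptions.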

Alexander Appel, Vanessa Kosoy

Subject: Computing Technology, Computer Technology

Alexander Appel, Vanessa Kosoy. Regret Bounds for Robust Online Decision Making [EB/OL]. (2025-04-09) [2025-05-05]. https://arxiv.org/abs/2504.06820.
