
Discounted Online Convex Optimization: Uniform Regret Across a Continuous Interval

Source: arXiv

Abstract

Reflecting the greater significance of recent history over the distant past in non-stationary environments, $\lambda$-discounted regret has been introduced in online convex optimization (OCO) to gracefully forget past data as new information arrives. When the discount factor $\lambda$ is given, online gradient descent (OGD) with an appropriate step size achieves an $O(1/\sqrt{1-\lambda})$ discounted regret. However, the value of $\lambda$ is often not predetermined in real-world scenarios. This gives rise to a significant open question: is it possible to develop a discounted algorithm that adapts to an unknown discount factor? In this paper, we affirmatively answer this question with a novel analysis demonstrating that smoothed OGD (SOGD) achieves a uniform $O(\sqrt{\log T/(1-\lambda)})$ discounted regret, holding simultaneously for all values of $\lambda$ across a continuous interval. The basic idea is to maintain multiple OGD instances to handle different discount factors, and to aggregate their outputs sequentially with an online prediction algorithm called Discounted-Normal-Predictor (DNP) (Kapralov and Panigrahy, 2010). Our analysis reveals that DNP can combine the decisions of two experts, even when they operate on discounted regret with different discount factors.
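For context, the $\lambda$-discounted regret referenced above is standardly defined as $R_T(\lambda) = \sum_{t=1}^{T} \lambda^{T-t} f_t(\mathbf{x}_t) - \min_{\mathbf{x} \in \mathcal{X}} \sum_{t=1}^{T} \lambda^{T-t} f_t(\mathbf{x})$, so a loss incurred $\tau$ rounds ago is downweighted by $\lambda^{\tau}$. Below is a minimal Python sketch of the base-learner step the abstract describes: OGD on a Euclidean ball with a step size scaled by $\sqrt{1-\lambda}$. The domain radius D, gradient bound G, and the exact step-size constant are illustrative assumptions rather than the paper's precise tuning, and the sketch omits the SOGD/DNP aggregation layer that handles an unknown $\lambda$.

import numpy as np

def project_ball(x, radius):
    # Euclidean projection onto the ball of the given radius.
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def discounted_ogd(grad_fn, horizon, dim, lam, D=1.0, G=1.0):
    # OGD whose step size shrinks as lam -> 1 (longer effective memory).
    # grad_fn(t, x) must return the (sub)gradient of the round-t loss at x.
    eta = (D / G) * np.sqrt(1.0 - lam)  # assumed step-size schedule
    x = np.zeros(dim)
    decisions = []
    for t in range(horizon):
        decisions.append(x.copy())
        x = project_ball(x - eta * grad_fn(t, x), D)
    return decisions

In the full method, one such instance would be run per candidate discount factor, with DNP combining their outputs round by round.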

Wenhao Yang, Sifan Yang, Lijun Zhang

Subject: Computing technology, computer technology

Wenhao Yang, Sifan Yang, Lijun Zhang. Discounted Online Convex Optimization: Uniform Regret Across a Continuous Interval [EB/OL]. (2025-05-26) [2025-07-02]. https://arxiv.org/abs/2505.19491.
