
Non-stationary Online Learning for Curved Losses: Improved Dynamic Regret via Mixability

Source: arXiv

English Abstract

Non-stationary online learning has drawn much attention in recent years. Despite considerable progress, dynamic regret minimization has primarily focused on convex functions, leaving losses with stronger curvature (e.g., squared or logistic loss) underexplored. In this work, we address this gap by showing that the regret can be substantially improved by leveraging the concept of mixability, a property that generalizes exp-concavity to effectively capture loss curvature. Let $d$ denote the dimensionality and $P_T$ the path length of comparators, which reflects the environmental non-stationarity. We demonstrate that an exponential-weight method with fixed-share updates achieves an $\mathcal{O}(d T^{1/3} P_T^{2/3} \log T)$ dynamic regret for mixable losses, improving upon the best-known $\mathcal{O}(d^{10/3} T^{1/3} P_T^{2/3} \log T)$ result (Baby and Wang, 2021) in its dependence on $d$. More importantly, this improvement arises from a simple yet powerful analytical framework that exploits the mixability, avoiding the Karush-Kuhn-Tucker-based analysis required by existing work.
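To make the ingredients concrete, below is a minimal sketch of one round of the classic exponential-weight update combined with a fixed-share step (Herbster and Warmuth's Fixed-Share). This is a generic illustration of the algorithmic template the abstract names, not the paper's exact method; the choice of experts, the learning rate `eta` (for an $\eta$-mixable loss one would use the mixability constant), and the share rate `alpha` are assumptions for the sketch.

```python
import numpy as np

def fixed_share_update(weights, losses, eta, alpha):
    """One round of exponential weights with a fixed-share step.

    weights: current distribution over N experts (nonnegative, sums to 1)
    losses:  per-expert losses observed this round
    eta:     learning rate; for an eta-mixable loss, the mixability constant
    alpha:   share rate; mixing toward uniform keeps every expert's weight
             bounded away from zero, which lets the method track drifting
             comparators (the source of dynamic-regret guarantees)
    """
    n = len(weights)
    # Exponential-weight (multiplicative) update, then renormalize.
    w = weights * np.exp(-eta * np.asarray(losses, dtype=float))
    w /= w.sum()
    # Fixed-share: redistribute a small alpha fraction of mass uniformly.
    return alpha / n + (1.0 - alpha) * w
```

The fixed-share step guarantees each expert retains at least `alpha / n` weight after every round, so the algorithm can recover quickly when the best expert changes, at the cost of a small amount of extra regret in stationary stretches.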

Yu-Jie Zhang, Peng Zhao, Masashi Sugiyama

Subject: Computing Technology, Computer Technology

Yu-Jie Zhang, Peng Zhao, Masashi Sugiyama. Non-stationary Online Learning for Curved Losses: Improved Dynamic Regret via Mixability [EB/OL]. (2025-06-12) [2025-07-02]. https://arxiv.org/abs/2506.10616.
