
Optimal Dynamic Regret by Transformers for Non-Stationary Reinforcement Learning

Source: arXiv
Abstract

Transformers have demonstrated exceptional performance across a wide range of domains. While their ability to perform reinforcement learning in-context has been established both theoretically and empirically, their behavior in non-stationary environments remains less understood. In this study, we address this gap by showing that transformers can achieve nearly optimal dynamic regret bounds in non-stationary settings. We prove that transformers are capable of approximating strategies used to handle non-stationary environments and can learn the approximator in the in-context learning setup. Our experiments further show that transformers can match or even outperform existing expert algorithms in such environments.
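The abstract's central quantity is dynamic regret: cumulative loss measured against a comparator that may change at every round, rather than a single fixed best policy. Below is a minimal toy sketch of that notion in a drifting multi-armed bandit; the bandit construction and all names are illustrative assumptions for exposition, not the paper's actual setting or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 500, 3  # rounds and arms (illustrative sizes)

# Toy non-stationary environment: arm means drift over time via a random walk.
means = np.cumsum(rng.normal(scale=0.05, size=(T, K)), axis=0)

def dynamic_regret(arm_choices, means):
    """Regret against the best arm at *each* round (a time-varying comparator),
    as opposed to static regret, which fixes one comparator for all rounds."""
    t = np.arange(len(arm_choices))
    return float((means.max(axis=1) - means[t, arm_choices]).sum())

# A naive learner that ignores non-stationarity and always pulls arm 0.
naive = np.zeros(T, dtype=int)

# An oracle tracking the changing optimum incurs zero dynamic regret.
oracle = means.argmax(axis=1)
```

A learner with small dynamic regret must track the drifting optimum, which is why non-stationary settings are strictly harder than the stationary case the earlier in-context RL results cover.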

Baiyuan Chen, Shinji Ito, Masaaki Imaizumi

Subjects: computing technology; computer science

Baiyuan Chen, Shinji Ito, Masaaki Imaizumi. Optimal Dynamic Regret by Transformers for Non-Stationary Reinforcement Learning [EB/OL]. (2025-08-22) [2025-09-06]. https://arxiv.org/abs/2508.16027.