Finite time analysis of temporal difference learning with linear function approximation: Tail averaging and regularisation
We study the finite-time behaviour of the popular temporal difference (TD) learning algorithm when combined with tail-averaging. We derive finite-time bounds on the parameter error of the tail-averaged TD iterate under a step-size choice that does not require information about the eigenvalues of the matrix underlying the projected TD fixed point. Our analysis shows that tail-averaged TD converges at the optimal $O\left(1/t\right)$ rate, both in expectation and with high probability. In addition, our bounds exhibit a sharper rate of decay for the initial error (bias), which is an improvement over averaging all iterates. We also propose and analyse a variant of TD that incorporates regularisation. From our analysis, we conclude that the regularised version of TD is useful for problems with ill-conditioned features.
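To make the setting concrete, here is a minimal sketch of TD(0) with linear function approximation and tail averaging on a small synthetic Markov reward process. This is an illustrative toy, not the paper's experimental setup: the problem size, step size, and tail fraction are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Markov reward process (hypothetical toy instance)
n_states, d = 5, 3
gamma = 0.9
P = rng.dirichlet(np.ones(n_states), size=n_states)   # row-stochastic transitions
r = rng.standard_normal(n_states)                     # per-state rewards
Phi = rng.standard_normal((n_states, d))              # linear features, one row per state


def tail_averaged_td(T=20000, alpha=0.05, tail_frac=0.5):
    """Run TD(0) with linear features; return the average of the last
    (1 - tail_frac) fraction of iterates (the tail average)."""
    theta = np.zeros(d)
    iterates = []
    s = 0
    for _ in range(T):
        s_next = rng.choice(n_states, p=P[s])
        # Standard TD(0) update with linear function approximation
        td_error = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
        theta = theta + alpha * td_error * Phi[s]
        iterates.append(theta.copy())
        s = s_next
    k = int(tail_frac * T)
    return np.mean(iterates[k:], axis=0)  # average only the tail iterates


theta_tail = tail_averaged_td()
print(theta_tail.shape)
```

Averaging only the tail discards the early iterates, which are dominated by the initial error; this is what yields the sharper bias decay discussed in the abstract, compared with averaging every iterate from the start.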
Gandharv Patil, Doina Precup, Prashanth L. A., Dheeraj Nagaraj
Computing Technology, Computer Technology
Gandharv Patil, Doina Precup, Prashanth L. A., Dheeraj Nagaraj. Finite time analysis of temporal difference learning with linear function approximation: Tail averaging and regularisation [EB/OL]. (2022-10-12) [2025-08-02]. https://arxiv.org/abs/2210.05918.