
Continual Reinforcement Learning via Autoencoder-Driven Task and New Environment Recognition


Source: arXiv
Abstract

Continual learning for reinforcement learning agents remains a significant challenge, particularly in preserving and leveraging existing information without an external signal to indicate changes in tasks or environments. In this study, we explore the effectiveness of autoencoders in detecting new tasks and matching observed environments to previously encountered ones. Our approach integrates policy optimization with familiarity autoencoders within an end-to-end continual learning system. This system can recognize and learn new tasks or environments while preserving knowledge from earlier experiences and can selectively retrieve relevant knowledge when re-encountering a known environment. Initial results demonstrate successful continual learning without external signals to indicate task changes or reencounters, showing promise for this methodology.
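The core mechanism described above — using autoencoder reconstruction error as a familiarity signal to decide whether an observation comes from a known environment or a new one — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class name `FamiliarityAutoencoder`, the linear encoder/decoder, and the fixed error threshold are all assumptions made for brevity.

```python
import numpy as np

class FamiliarityAutoencoder:
    """Linear autoencoder whose reconstruction error acts as a familiarity score.

    A low error on an observation batch suggests the environment that produced
    it has been seen before; a high error suggests a novel environment.
    (Illustrative sketch only; the paper's architecture may differ.)
    """

    def __init__(self, obs_dim, code_dim, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.normal(scale=0.1, size=(obs_dim, code_dim))
        self.W_dec = rng.normal(scale=0.1, size=(code_dim, obs_dim))
        self.lr = lr

    def error(self, x):
        """Mean squared reconstruction error over a batch of observations."""
        recon = x @ self.W_enc @ self.W_dec
        return float(np.mean((recon - x) ** 2))

    def train(self, x, steps=1000):
        """Plain gradient descent on the reconstruction loss."""
        for _ in range(steps):
            code = x @ self.W_enc
            recon = code @ self.W_dec
            grad_out = 2.0 * (recon - x) / x.size
            grad_dec = code.T @ grad_out
            grad_enc = x.T @ (grad_out @ self.W_dec.T)
            self.W_dec -= self.lr * grad_dec
            self.W_enc -= self.lr * grad_enc

def match_environment(x, autoencoders, threshold):
    """Return the index of the best-matching known environment, or None if the
    observations look novel (all reconstruction errors exceed the threshold)."""
    if not autoencoders:
        return None
    errors = [ae.error(x) for ae in autoencoders]
    best = int(np.argmin(errors))
    return best if errors[best] < threshold else None
```

In a full continual-learning loop, a `None` result from `match_environment` would trigger spawning a fresh autoencoder–policy pair for the new environment, while a matched index would retrieve the corresponding stored policy — mirroring the selective-retrieval behavior the abstract describes.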

Zeki Doruk Erden, Donia Gasmi, Boi Faltings

Subject: Foundations of Automation Theory

Zeki Doruk Erden, Donia Gasmi, Boi Faltings. Continual Reinforcement Learning via Autoencoder-Driven Task and New Environment Recognition [EB/OL]. (2025-05-13) [2025-06-03]. https://arxiv.org/abs/2505.09003.
