
Plasticine: Accelerating Research in Plasticity-Motivated Deep Reinforcement Learning

Source: arXiv
Abstract

Developing lifelong learning agents is crucial for artificial general intelligence. However, deep reinforcement learning (RL) systems often suffer from plasticity loss, where neural networks gradually lose their ability to adapt during training. Despite its significance, this field lacks unified benchmarks and evaluation protocols. We introduce Plasticine, the first open-source framework for benchmarking plasticity optimization in deep RL. Plasticine provides single-file implementations of over 13 mitigation methods, 10 evaluation metrics, and learning scenarios with increasing non-stationarity levels from standard to open-ended environments. This framework enables researchers to systematically quantify plasticity loss, evaluate mitigation strategies, and analyze plasticity dynamics across different contexts. Our documentation, examples, and source code are available at https://github.com/RLE-Foundation/Plasticine.
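For context, one plasticity indicator commonly tracked in this literature is the dormant-neuron ratio: the fraction of hidden units whose activations have effectively collapsed to zero during training. The sketch below is a minimal plain-PyTorch illustration of such a metric; the helper name dormant_neuron_ratio and the threshold value are assumptions made for this example and do not reflect Plasticine's actual API.

import torch
import torch.nn as nn

def dormant_neuron_ratio(model: nn.Module, inputs: torch.Tensor, tau: float = 0.025) -> float:
    # Fraction of ReLU units whose normalized mean activation is <= tau,
    # averaged over the ReLU layers of the model. (Illustrative only; not
    # Plasticine's implementation.)
    activations = {}

    def make_hook(name):
        def hook(_module, _inp, out):
            activations[name] = out.detach()
        return hook

    handles = [m.register_forward_hook(make_hook(n))
               for n, m in model.named_modules() if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        model(inputs)
    for h in handles:
        h.remove()

    ratios = []
    for act in activations.values():
        score = act.abs().mean(dim=0)            # mean activation per unit
        score = score / (score.mean() + 1e-8)    # normalize by the layer mean
        ratios.append((score <= tau).float().mean().item())
    return float(sum(ratios) / max(len(ratios), 1))

# Example: a small policy network evaluated on a batch of random observations.
policy = nn.Sequential(nn.Linear(8, 256), nn.ReLU(),
                       nn.Linear(256, 256), nn.ReLU(),
                       nn.Linear(256, 4))
print(dormant_neuron_ratio(policy, torch.randn(512, 8)))

A rising dormant-neuron ratio over the course of training is one symptom of the plasticity loss that the benchmark's mitigation methods aim to address.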

Dacheng Tao, Mingqi Yuan, Qi Wang, Guozheng Ma, Bo Li, Xin Jin, Yunbo Wang, Xiaokang Yang, Wenjun Zeng

Subject: Computing Technology, Computer Technology

Dacheng Tao, Mingqi Yuan, Qi Wang, Guozheng Ma, Bo Li, Xin Jin, Yunbo Wang, Xiaokang Yang, Wenjun Zeng. Plasticine: Accelerating Research in Plasticity-Motivated Deep Reinforcement Learning[EB/OL]. (2025-04-24)[2025-06-14]. https://arxiv.org/abs/2504.17490.
