Action-Adaptive Continual Learning: Enabling Policy Generalization under Dynamic Action Spaces
Continual Learning (CL) enables agents to learn a sequence of tasks, accumulating knowledge from past tasks and reusing it for problem-solving and future learning. However, existing CL methods typically assume that the agent's capabilities remain static even within dynamic environments, which does not reflect real-world scenarios where capabilities change over time. This paper introduces a new and realistic problem, Continual Learning with Dynamic Capabilities (CL-DC), which poses a significant challenge for CL agents: how can a policy generalize across different action spaces? Inspired by cortical functions, we propose an Action-Adaptive Continual Learning (AACL) framework to address this challenge. Our framework decouples the agent's policy from any specific action space by building an action representation space. When a new action space arrives, the action-representation encoder-decoder is adaptively fine-tuned to maintain a balance between stability and plasticity. Furthermore, we release a benchmark based on three environments to validate the effectiveness of methods for CL-DC. Experimental results demonstrate that our framework outperforms popular CL methods by generalizing the policy across action spaces.
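As a rough illustration of the decoupling idea described in the abstract, the following is a minimal PyTorch-style sketch, not the authors' implementation: a latent policy maps states to an action representation, an encoder maps each concrete action into the same latent space, and a decoder scores the currently available actions by similarity. All module names, dimensions, and the dot-product scoring rule are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code) of a policy that is
# decoupled from the concrete action space via a shared latent space.
import torch
import torch.nn as nn

class ActionEncoder(nn.Module):
    """Maps a feature description of a concrete action (here: one-hot)
    into the shared action-representation space."""
    def __init__(self, action_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(action_dim, 64), nn.ReLU(),
                                 nn.Linear(64, latent_dim))

    def forward(self, action_feat: torch.Tensor) -> torch.Tensor:
        return self.net(action_feat)

class ActionDecoder(nn.Module):
    """Scores every action of the *current* action space by comparing its
    latent representation with the policy's latent output (dot product)."""
    def forward(self, policy_latent: torch.Tensor,
                action_latents: torch.Tensor) -> torch.Tensor:
        # policy_latent: (batch, latent_dim); action_latents: (n_actions, latent_dim)
        return policy_latent @ action_latents.T  # (batch, n_actions) logits

class LatentPolicy(nn.Module):
    """Action-space-agnostic policy: state -> latent action representation."""
    def __init__(self, state_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# When the action space changes, only the encoder-decoder would be fine-tuned
# while the latent policy is reused -- the stability/plasticity balance the
# abstract refers to.
state_dim, latent_dim, n_actions = 8, 16, 5
policy = LatentPolicy(state_dim, latent_dim)
encoder = ActionEncoder(n_actions, latent_dim)
decoder = ActionDecoder()

states = torch.randn(32, state_dim)
action_feats = torch.eye(n_actions)       # one-hot descriptions of the actions
logits = decoder(policy(states), encoder(action_feats))
probs = torch.softmax(logits, dim=-1)     # distribution over the current actions
```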
Chaofan Pan, Jiafen Liu, Yanhua Li, Linbo Xiong, Fan Min, Wei Wei, Xin Yang
Computing Technology, Computer Technology
Chaofan Pan, Jiafen Liu, Yanhua Li, Linbo Xiong, Fan Min, Wei Wei, Xin Yang. Action-Adaptive Continual Learning: Enabling Policy Generalization under Dynamic Action Spaces [EB/OL]. (2025-06-05) [2025-06-21]. https://arxiv.org/abs/2506.05702