FLoRA: Sample-Efficient Preference-based RL via Low-Rank Style Adaptation of Reward Functions
Preference-based reinforcement learning (PbRL) is a suitable approach for style adaptation of pre-trained robotic behavior: adapting the robot's policy to follow human user preferences while still being able to perform the original task. However, collecting preferences for the adaptation process in robotics is often challenging and time-consuming. In this work, we explore the adaptation of pre-trained robots in the low-preference-data regime. We show that, in this regime, recent adaptation approaches suffer from catastrophic reward forgetting (CRF), where the updated reward model overfits to the new preferences, rendering the agent unable to perform the original task. To mitigate CRF, we propose to enhance the original reward model with a small number of additional parameters (low-rank matrices) responsible for modeling the preference adaptation. Our evaluation shows that our method can efficiently and effectively adjust robotic behavior to human preferences across simulation benchmark tasks and multiple real-world robotic tasks.
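The idea described above amounts to LoRA-style adaptation of a reward network: the pre-trained reward weights stay frozen, and small trainable low-rank matrices capture the user's preferred style. The sketch below illustrates that reading in PyTorch; the names (LoRALinear, RewardModel, preference_loss), the rank, and the Bradley-Terry-style preference loss are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # keep the original reward weights intact
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x):
        # frozen base output + low-rank "style" correction
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scale

class RewardModel(nn.Module):
    """Toy reward model r(s, a); only the LoRA matrices are trained on new preferences."""
    def __init__(self, obs_dim, act_dim, hidden=64, rank=4):
        super().__init__()
        self.net = nn.Sequential(
            LoRALinear(nn.Linear(obs_dim + act_dim, hidden), rank),
            nn.ReLU(),
            LoRALinear(nn.Linear(hidden, 1), rank),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def preference_loss(reward_model, seg_a, seg_b, label):
    """Bradley-Terry-style loss over two trajectory segments; label = 1.0 if segment A is preferred."""
    r_a = reward_model(*seg_a).sum(dim=-1)   # summed predicted reward over segment A
    r_b = reward_model(*seg_b).sum(dim=-1)
    return nn.functional.binary_cross_entropy_with_logits(r_a - r_b, label)

Because only the low-rank matrices receive gradients, the small set of new preference labels cannot overwrite the pre-trained reward model, which is one plausible reading of how the approach mitigates catastrophic reward forgetting.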
Daniel Marta, Simon Holk, Miguel Vasco, Jens Lundell, Timon Homberger, Finn Busch, Olov Andersson, Danica Kragic, Iolanda Leite
Automation technology; automation equipment
Daniel Marta, Simon Holk, Miguel Vasco, Jens Lundell, Timon Homberger, Finn Busch, Olov Andersson, Danica Kragic, Iolanda Leite. FLoRA: Sample-Efficient Preference-based RL via Low-Rank Style Adaptation of Reward Functions [EB/OL]. (2025-04-14) [2025-04-30]. https://arxiv.org/abs/2504.10002.