
Flow-Based Policy for Online Reinforcement Learning

Source: arXiv
Abstract

We present FlowRL, a novel framework for online reinforcement learning that integrates flow-based policy representation with Wasserstein-2-regularized optimization. We argue that, in addition to training signals, enhancing the expressiveness of the policy class is crucial for performance gains in RL. Flow-based generative models offer such potential, excelling at capturing complex, multimodal action distributions. However, their direct application in online RL is challenging due to a fundamental objective mismatch: standard flow training optimizes for static data imitation, while RL requires value-based policy optimization through a dynamic buffer, leading to difficult optimization landscapes. FlowRL first models policies via a state-dependent velocity field, generating actions through deterministic ODE integration from noise. We derive a constrained policy search objective that jointly maximizes Q through the flow policy while bounding the Wasserstein-2 distance to a behavior-optimal policy implicitly derived from the replay buffer. This formulation effectively aligns the flow optimization with the RL objective, enabling efficient and value-aware policy learning despite the complexity of the policy class. Empirical evaluations on DMControl and HumanoidBench demonstrate that FlowRL achieves competitive performance on online reinforcement learning benchmarks.
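The abstract describes generating actions by deterministically integrating an ODE defined by a state-dependent velocity field, starting from Gaussian noise. The sketch below illustrates that sampling procedure in PyTorch under stated assumptions; it is not the authors' implementation, and the network architecture, the scalar time input, and the Euler step count are illustrative choices only.

```python
# Minimal sketch (assumptions noted above): a state-conditioned velocity field
# and action sampling via deterministic Euler integration of the flow ODE
# from noise, as described in the abstract.
import torch
import torch.nn as nn


class VelocityField(nn.Module):
    """v_theta(a, t, s): predicts the flow velocity for an action sample."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, a, t, s):
        # t is passed as one extra scalar feature per sample (an assumption here)
        return self.net(torch.cat([a, t, s], dim=-1))


@torch.no_grad()
def sample_action(v_theta, state, action_dim, steps: int = 10):
    """Generate an action by integrating da/dt = v_theta(a, t, s) from noise."""
    a = torch.randn(state.shape[0], action_dim)   # a_0 ~ N(0, I)
    dt = 1.0 / steps
    for k in range(steps):
        t = torch.full((state.shape[0], 1), k * dt)
        a = a + dt * v_theta(a, t, state)         # deterministic Euler step
    return a                                      # a_1: the policy's action


# Usage: sample a batch of actions for a hypothetical 17-dim observation,
# 6-dim action space.
if __name__ == "__main__":
    v = VelocityField(state_dim=17, action_dim=6)
    obs = torch.randn(4, 17)
    actions = sample_action(v, obs, action_dim=6)
    print(actions.shape)  # torch.Size([4, 6])
```

In FlowRL this sampler would be trained not by imitation alone but by the constrained objective sketched in the abstract: maximizing Q under the flow policy while keeping its Wasserstein-2 distance to a behavior-optimal policy bounded.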

Lei Lv, Yunfei Li, Yu Luo, Fuchun Sun, Tao Kong, Jiafeng Xu, Xiao Ma

Subject: Computing Technology; Computer Technology

Lei Lv, Yunfei Li, Yu Luo, Fuchun Sun, Tao Kong, Jiafeng Xu, Xiao Ma. Flow-Based Policy for Online Reinforcement Learning [EB/OL]. (2025-06-15) [2025-06-23]. https://arxiv.org/abs/2506.12811.