
Score as Action: Fine-Tuning Diffusion Generative Models by Continuous-time Reinforcement Learning

Source: arXiv
Abstract

Reinforcement learning from human feedback (RLHF), which aligns a diffusion model with input prompts, has become a crucial step in building reliable generative AI models. Most works in this area use a discrete-time formulation, which is prone to errors induced by discretization and is often not applicable to models with higher-order or black-box solvers. The objective of this study is to develop a disciplined approach to fine-tuning diffusion models using continuous-time RL, formulated as a stochastic control problem with a reward function that aligns the end result (terminal state) with the input prompt. The key idea is to treat score matching as controls or actions, thereby making connections to policy optimization and regularization in continuous-time RL. To carry out this idea, we lay out a new policy optimization framework for continuous-time RL, and illustrate its potential in enhancing the design space of value networks by leveraging the structural properties of diffusion models. We validate the advantages of our method through experiments on the downstream task of fine-tuning the large-scale Text2Image model Stable Diffusion v1.5.
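
As a rough illustration of the "score as action" formulation described in the abstract (a minimal sketch using standard score-based diffusion conventions, not equations taken from the paper; the drift f, diffusion coefficient g, reward r, and regularization weight β below are illustrative symbols), generation can be viewed as a controlled SDE in which the score plays the role of the action:

\[
\mathrm{d}X_t = \bigl[f(X_t, t) + g(t)^2\, a_t(X_t)\bigr]\,\mathrm{d}t + g(t)\,\mathrm{d}W_t, \qquad X_0 \sim p_{\mathrm{prior}},
\]

where the action \(a_t\) takes the place of the pretrained score \(\nabla_x \log p_t(x)\). Fine-tuning then corresponds, under these assumptions, to a stochastic control problem of the form

\[
\max_{a}\;\; \mathbb{E}\bigl[r(X_T)\bigr] \;-\; \beta\,\mathrm{KL}\bigl(\mathbb{P}^{a}\,\|\,\mathbb{P}^{\mathrm{pre}}\bigr),
\]

where \(r\) measures how well the terminal sample \(X_T\) aligns with the input prompt and the KL term regularizes the fine-tuned policy toward the pretrained model, mirroring the policy optimization and regularization ingredients mentioned above.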

Ji Zhang, David D. Yao, Wenpin Tang, Hanyang Zhao, Haoxian Chen

Subject: Computing technology; computer technology

Ji Zhang, David D. Yao, Wenpin Tang, Hanyang Zhao, Haoxian Chen. Score as Action: Fine-Tuning Diffusion Generative Models by Continuous-time Reinforcement Learning [EB/OL]. (2025-08-21) [2025-09-06]. https://arxiv.org/abs/2502.01819.