Aligning Dialogue Agents with Global Feedback via Large Language Model Reward Decomposition
We propose a large language model based reward decomposition framework for aligning dialogue agents using only a single session-level feedback signal. We leverage the reasoning capabilities of a frozen, pretrained large language model (LLM) to infer fine-grained local implicit rewards by decomposing global, session-level feedback. Our first, text-only variant prompts the LLM to perform reward decomposition using only the dialogue transcript. The second, multimodal variant incorporates additional behavioral cues, such as pitch, gaze, and facial affect, expressed as natural language descriptions. These inferred turn-level rewards are distilled into a lightweight reward model, which we use for RL-based fine-tuning of dialogue generation. We evaluate both text-only and multimodal variants against state-of-the-art reward decomposition methods and demonstrate notable improvements in human evaluations of conversation quality, suggesting that LLMs are strong reward decomposers that obviate the need for manual reward shaping and granular human feedback.
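To make the pipeline concrete, the sketch below illustrates one plausible instantiation of the text-only variant: a frozen LLM is prompted to split a single session-level score into per-turn rewards, and a lightweight reward model is then regressed onto those inferred rewards. The prompt wording, the `Turn` and `TurnRewardModel` helpers, and the MSE distillation objective are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of LLM-based reward decomposition and distillation.
# Helper names, prompt text, and the distillation loss are assumptions,
# not the paper's released code.
import json
from dataclasses import dataclass

import torch
import torch.nn as nn


@dataclass
class Turn:
    speaker: str
    text: str


def build_decomposition_prompt(turns: list[Turn], session_score: float) -> str:
    """Ask a frozen LLM to split one session-level score into turn-level rewards."""
    transcript = "\n".join(f"[{i}] {t.speaker}: {t.text}" for i, t in enumerate(turns))
    return (
        "The following dialogue received an overall session rating of "
        f"{session_score:.2f} (range 0-1).\n"
        "Assign each agent turn a scalar reward reflecting its contribution to that "
        "rating. Respond with a JSON list of {\"turn\": index, \"reward\": value}.\n\n"
        f"{transcript}\n"
    )


def parse_turn_rewards(llm_output: str, num_turns: int) -> list[float]:
    """Parse the LLM's JSON response into a dense per-turn reward vector."""
    rewards = [0.0] * num_turns
    for item in json.loads(llm_output):
        rewards[int(item["turn"])] = float(item["reward"])
    return rewards


class TurnRewardModel(nn.Module):
    """Lightweight reward head distilled from the LLM-inferred turn rewards."""

    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(hidden_dim, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, turn_embeddings: torch.Tensor) -> torch.Tensor:
        # turn_embeddings: (num_turns, hidden_dim) -> (num_turns,) predicted rewards
        return self.scorer(turn_embeddings).squeeze(-1)


def distillation_loss(model: TurnRewardModel,
                      turn_embeddings: torch.Tensor,
                      llm_rewards: torch.Tensor) -> torch.Tensor:
    """Regress the small reward model onto the LLM-decomposed turn rewards."""
    return nn.functional.mse_loss(model(turn_embeddings), llm_rewards)
```

The distilled reward model would then supply dense turn-level rewards to a standard RL fine-tuning loop (e.g., PPO) over the dialogue policy, replacing the sparse session-level signal.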
Dong Won Lee, Hae Won Park, Cynthia Breazeal, Louis-Philippe Morency
Subject: Computing Technology, Computer Technology
Dong Won Lee, Hae Won Park, Cynthia Breazeal, Louis-Philippe Morency. Aligning Dialogue Agents with Global Feedback via Large Language Model Reward Decomposition [EB/OL]. (2025-05-21) [2025-07-20]. https://arxiv.org/abs/2505.15922