
Online Knowledge Distillation with Reward Guidance

Source: arXiv
Abstract

This work studies knowledge distillation (KD) for large language models (LLMs) through preference optimization. We propose a reward-guided imitation learning framework for sequential KD, formulating a min-max optimization problem between the policy and reward model (RM) to minimize the performance gap between the student and teacher policies. Specifically, the reward optimization is constrained to achieve near-optimality within a confidence set for preference alignment. For preference data construction, we explore both offline and online preference-based KD. Additionally, we reformulate the RM using the $Q$-value function and extend the framework to white-box KD, where the teacher policy's predicted probabilities are accessible. Theoretical analysis and empirical results demonstrate the effectiveness of the proposed framework.
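As a rough sketch of the min-max structure described in the abstract (the notation below is assumed for illustration and is not taken from the paper), the student policy $\pi_\theta$ is trained against a reward model $r_\phi$ chosen adversarially from a confidence set $\mathcal{R}_\delta$ consistent with the preference data:

$$
\min_{\pi_\theta} \max_{r_\phi \in \mathcal{R}_\delta} \; \mathbb{E}_{x \sim \mathcal{D}} \Big[ \mathbb{E}_{y \sim \pi_T(\cdot \mid x)} \big[ r_\phi(x, y) \big] - \mathbb{E}_{y \sim \pi_\theta(\cdot \mid x)} \big[ r_\phi(x, y) \big] \Big],
$$

where $\pi_T$ denotes the teacher policy and $\mathcal{D}$ the prompt distribution. Minimizing this objective shrinks the reward gap between student and teacher under the least favorable reward model that remains consistent with the observed preferences, which matches the abstract's description of constraining reward optimization to a confidence set for preference alignment.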

Chen Jia

Computing Technology; Computer Technology

Chen Jia. Online Knowledge Distillation with Reward Guidance [EB/OL]. (2025-05-24) [2025-06-24]. https://arxiv.org/abs/2505.18952.
