Critique-Guided Distillation: Improving Supervised Fine-tuning via Better Distillation

Source: arXiv
English Abstract

Supervised fine-tuning (SFT) using expert demonstrations often suffers from the imitation problem, where the model learns to reproduce correct responses without understanding the underlying rationale. To address this limitation, we propose Critique-Guided Distillation (CGD), a novel multi-stage framework that integrates teacher-generated explanatory critiques and refined responses into the SFT process. A student model is then trained to map the triplet of prompt, teacher critique, and its own initial response to the corresponding refined teacher response, thereby learning both what to imitate and why. Using entropy-based analysis, we show that CGD reduces refinement uncertainty and can be interpreted as a Bayesian posterior update. We perform an extensive empirical evaluation of CGD on a variety of benchmark tasks and demonstrate significant gains on both math (AMC23 +17.5%) and language understanding tasks (MMLU-Pro +6.3%), while successfully mitigating the format drift issues observed in previous critique fine-tuning (CFT) techniques.
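
As a concrete illustration of the triplet-to-target mapping described in the abstract, the Python sketch below assembles one CGD-style training pair: the input conditions on the prompt, the student's initial response, and the teacher critique, while the supervision target is the teacher's refined response. The field names, prompt template, and toy example values are hypothetical; the paper's exact data format is not given in this abstract.

# Minimal sketch of assembling a CGD-style SFT example.
# Field names and the prompt template are assumptions, not the paper's format.

def build_cgd_example(prompt: str, student_draft: str,
                      teacher_critique: str, teacher_refined: str) -> dict:
    """Map (prompt, teacher critique, student's initial response)
    to the teacher's refined response."""
    source = (
        f"Problem:\n{prompt}\n\n"
        f"Initial response:\n{student_draft}\n\n"
        f"Critique:\n{teacher_critique}\n\n"
        f"Refined response:"
    )
    # Standard SFT pair: condition on the triplet, supervise on the refined
    # answer, so the student sees both what to imitate and why.
    return {"input": source, "target": teacher_refined}

# Hypothetical usage with a toy arithmetic item:
example = build_cgd_example(
    prompt="Compute 17 * 24.",
    student_draft="17 * 24 = 398",
    teacher_critique="The partial products are wrong: 17*24 = 17*20 + 17*4 = 340 + 68.",
    teacher_refined="17 * 24 = 340 + 68 = 408",
)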

Berkcan Kapusuzoglu, Supriyo Chakraborty, Chia-Hsuan Lee, Sambit Sahu

Subjects: Computing Technology, Computer Technology

Berkcan Kapusuzoglu, Supriyo Chakraborty, Chia-Hsuan Lee, Sambit Sahu. Critique-Guided Distillation: Improving Supervised Fine-tuning via Better Distillation [EB/OL]. (2025-05-16) [2025-06-13]. https://arxiv.org/abs/2505.11628.