ToDi: Token-wise Distillation via Fine-Grained Divergence Control
Large language models (LLMs) offer impressive performance but are impractical for resource-constrained deployment due to high latency and energy consumption. Knowledge distillation (KD) addresses this by transferring knowledge from a large teacher to a smaller student model. However, conventional KD, notably approaches based on Forward KL (FKL) and Reverse KL (RKL), applies a uniform divergence loss across the entire vocabulary, neglecting token-level prediction discrepancies. By investigating these representative divergences via gradient analysis, we reveal that FKL boosts underestimated tokens while RKL suppresses overestimated ones, showing their complementary roles. Based on this observation, we propose Token-wise Distillation (ToDi), a novel method that adaptively combines FKL and RKL per token using a sigmoid-based weighting function derived from the teacher-student probability log-ratio. ToDi dynamically emphasizes the appropriate divergence for each token, enabling precise distribution alignment. We demonstrate that ToDi consistently outperforms recent distillation baselines that use uniform or less granular strategies across instruction-following benchmarks. Extensive ablation studies and efficiency analysis further validate ToDi's effectiveness and practicality.
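The abstract describes a per-token combination of forward and reverse KL weighted by a sigmoid of the teacher-student log-ratio. The sketch below is a hypothetical PyTorch rendering of that idea under our own assumptions (the weight is computed from the log-ratio at the ground-truth token, and the combined loss is w·FKL + (1−w)·RKL); the exact formulation and hyperparameters follow the paper, not this snippet.

```python
import torch
import torch.nn.functional as F

def todi_style_loss(teacher_logits, student_logits, target_ids, pad_id=-100):
    """Hypothetical sketch of a ToDi-style per-token loss (not the official code).

    teacher_logits, student_logits: [batch, seq_len, vocab]
    target_ids: [batch, seq_len] ground-truth token ids (pad_id marks ignored positions)
    """
    # Full-vocabulary distributions at each position
    log_p = F.log_softmax(teacher_logits, dim=-1)   # teacher log-probs
    log_q = F.log_softmax(student_logits, dim=-1)   # student log-probs
    p = log_p.exp()
    q = log_q.exp()

    # Per-token divergences over the vocabulary
    fkl = (p * (log_p - log_q)).sum(-1)   # KL(p || q): boosts underestimated tokens
    rkl = (q * (log_q - log_p)).sum(-1)   # KL(q || p): suppresses overestimated tokens

    # Sigmoid weight from the teacher/student log-ratio at the target token
    # (assumed form: w -> 1 when the teacher assigns much higher probability than the student)
    tgt = target_ids.clamp(min=0).unsqueeze(-1)
    log_ratio = (log_p.gather(-1, tgt) - log_q.gather(-1, tgt)).squeeze(-1)
    w = torch.sigmoid(log_ratio)

    # Token-wise mixture of the two divergences, averaged over non-pad positions
    mask = (target_ids != pad_id).float()
    loss = ((w * fkl + (1.0 - w) * rkl) * mask).sum() / mask.sum().clamp(min=1.0)
    return loss
```

In this reading, tokens the teacher rates much more highly than the student receive mostly FKL (pulling the student's probability up), while tokens the student overestimates receive mostly RKL (pushing it down), matching the complementary roles identified in the gradient analysis.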
Seongryong Jung, Suwan Yoon, DongGeon Kim, Hwanhee Lee
Computing technology; computer technology
Seongryong Jung, Suwan Yoon, DongGeon Kim, Hwanhee Lee. ToDi: Token-wise Distillation via Fine-Grained Divergence Control [EB/OL]. (2025-05-22) [2025-07-25]. https://arxiv.org/abs/2505.16297.