
Exploring the Limits of Model Compression in LLMs: A Knowledge Distillation Study on QA Tasks

Source: arXiv

Abstract

Large Language Models (LLMs) have demonstrated outstanding performance across a range of NLP tasks; however, their computational demands hinder deployment in real-world, resource-constrained environments. This work investigates the extent to which LLMs can be compressed using Knowledge Distillation (KD) while maintaining strong performance on Question Answering (QA) tasks. We evaluate student models distilled from the Pythia and Qwen2.5 families on two QA benchmarks, SQuAD and MLQA, under zero-shot and one-shot prompting conditions. Results show that student models retain over 90% of their teacher models' performance while reducing parameter counts by up to 57.1%. Furthermore, one-shot prompting yields additional performance gains over zero-shot setups for both model families. These findings underscore the trade-off between model efficiency and task performance, demonstrating that KD, combined with minimal prompting, can yield compact yet capable QA systems suitable for resource-constrained applications.
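The abstract does not detail the distillation objective, but teacher-to-student KD setups like the one described typically minimize a temperature-scaled KL divergence between teacher and student output distributions (Hinton-style soft targets). A minimal sketch, assuming this standard objective; the function names and temperature value are illustrative, not taken from the paper:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of raw logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients keep a consistent magnitude across T."""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# A student matching the teacher exactly incurs (near-)zero loss;
# a diverging student incurs a positive loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))
print(distillation_loss([0.0, 0.0, 3.0], [3.0, 0.0, 0.0]))
```

In practice this soft-target term is usually combined with the ordinary cross-entropy loss on gold QA labels, weighted by a mixing coefficient.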

Joyeeta Datta, Niclas Doll, Qusai Ramadan, Zeyd Boukhers

Subjects: Computing Technology, Computer Technology

Joyeeta Datta, Niclas Doll, Qusai Ramadan, Zeyd Boukhers. Exploring the Limits of Model Compression in LLMs: A Knowledge Distillation Study on QA Tasks [EB/OL]. (2025-07-10) [2025-07-21]. https://arxiv.org/abs/2507.07630.
