Minifinetuning: Low-Data Generation Domain Adaptation through Corrective Self-Distillation
Finetuning language models for a new domain inevitably leads to the deterioration of their general performance. This degradation becomes more pronounced the more limited the finetuning data is. We introduce minifinetuning (MFT), a method for language model domain adaptation that considerably reduces the effects of overfitting-induced degeneralization in low-data settings, and which does so without requiring any pre-training data for replay. MFT demonstrates 2-10x more favourable specialization-to-degeneralization ratios than standard finetuning across a wide range of models and domains and exhibits an intrinsic robustness to overfitting when data in the new domain is scarce, down to as little as 500 samples. Employing corrective self-distillation that is individualized on the sample level, MFT outperforms parameter-efficient finetuning methods, demonstrates replay-like degeneralization mitigation properties, and is composable with either for a combined effect.
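The abstract describes corrective self-distillation only at a high level; the sketch below is a minimal illustration of one plausible reading, assuming a frozen copy of the pre-trained model serves as the teacher and each training token receives an individually corrected soft target that raises the ground-truth token's probability before distillation. The `boost` threshold, function names, and the exact correction rule are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F

def corrected_teacher_distribution(teacher_logits, target_ids, boost=0.3):
    # Start from the frozen teacher's predictive distribution and re-assign
    # probability mass so the ground-truth token receives at least `boost`
    # probability, rescaling all other tokens proportionally.
    # (`boost` is an illustrative hyperparameter, not taken from the paper.)
    probs = F.softmax(teacher_logits, dim=-1)                 # (batch, seq, vocab)
    p_true = probs.gather(-1, target_ids.unsqueeze(-1))       # teacher prob of gold token
    target_p = torch.clamp(p_true, min=boost)                 # corrected gold probability
    scale = (1.0 - target_p) / (1.0 - p_true).clamp_min(1e-8)
    corrected = probs * scale                                  # shrink non-gold mass
    corrected.scatter_(-1, target_ids.unsqueeze(-1), target_p)
    return corrected

def mft_loss(student_logits, teacher_logits, target_ids):
    # Cross-entropy of the student against the per-token corrected teacher
    # distribution (soft targets) instead of the one-hot labels used in
    # standard finetuning.
    with torch.no_grad():
        soft_targets = corrected_teacher_distribution(teacher_logits, target_ids)
    log_probs = F.log_softmax(student_logits, dim=-1)
    return -(soft_targets * log_probs).sum(-1).mean()

Under this reading, training against the corrected teacher distribution rather than one-hot domain labels is what would keep the finetuned model close to its general-purpose behaviour while still specializing it to the new domain.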
Peter Belcak, Greg Heinrich, Jan Kautz, Pavlo Molchanov
Computing Technology, Computer Technology
Peter Belcak, Greg Heinrich, Jan Kautz, Pavlo Molchanov. Minifinetuning: Low-Data Generation Domain Adaptation through Corrective Self-Distillation [EB/OL]. (2025-05-30) [2025-07-16]. https://arxiv.org/abs/2506.15702.