When Fine-Tuning Fails: Lessons from MS MARCO Passage Ranking
This paper investigates the counterintuitive phenomenon in which fine-tuning pre-trained transformer models degrades performance on the MS MARCO passage ranking task. Through comprehensive experiments involving five model variants, including full-parameter fine-tuning and parameter-efficient LoRA adaptations, we demonstrate that all fine-tuning approaches underperform the base sentence-transformers/all-MiniLM-L6-v2 model (MRR@10: 0.3026). Our analysis reveals that fine-tuning disrupts the optimal embedding space structure learned during the base model's extensive pre-training on 1 billion sentence pairs, including 9.1 million MS MARCO samples. UMAP visualizations show progressive flattening of the embedding space, while training dynamics analysis and computational efficiency metrics further support our findings. These results challenge conventional wisdom about the effectiveness of transfer learning on saturated benchmarks and suggest that architectural innovations may be necessary for meaningful improvements.
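As context for the reported MRR@10 figure, the sketch below shows how dense passage ranking with the base model is typically scored; it is not the authors' evaluation code, and the toy queries, passages, and relevance labels are illustrative assumptions rather than MS MARCO data.

```python
# Minimal sketch: rank candidate passages with the base
# sentence-transformers/all-MiniLM-L6-v2 bi-encoder and compute MRR@10.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Each entry: (query, candidate passages, index of the relevant passage).
# These samples are placeholders, not the MS MARCO dev set.
samples = [
    ("what is the capital of france",
     ["Paris is the capital of France.",
      "Berlin is the capital of Germany.",
      "The Louvre is a museum in Paris."],
     0),
]

reciprocal_ranks = []
for query, passages, gold_idx in samples:
    q_emb = model.encode(query, convert_to_tensor=True)
    p_emb = model.encode(passages, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, p_emb)[0]              # cosine similarity to each passage
    ranking = scores.argsort(descending=True).tolist()  # best-scoring passage first
    rank = ranking.index(gold_idx) + 1                  # 1-based rank of the relevant passage
    reciprocal_ranks.append(1.0 / rank if rank <= 10 else 0.0)

mrr_at_10 = sum(reciprocal_ranks) / len(reciprocal_ranks)
print(f"MRR@10 = {mrr_at_10:.4f}")
```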
Manu Pande, Shahil Kumar, Anay Yatin Damle
Computing Technology; Computer Technology
Manu Pande, Shahil Kumar, Anay Yatin Damle. When Fine-Tuning Fails: Lessons from MS MARCO Passage Ranking [EB/OL]. (2025-06-23) [2025-07-16]. https://arxiv.org/abs/2506.18535.