
Augmented Relevance Datasets with Fine-Tuned Small LLMs

Source: arXiv
Abstract

Building high-quality datasets and labeling query-document relevance are essential yet resource-intensive tasks, requiring detailed guidelines and substantial effort from human annotators. This paper explores the use of small, fine-tuned large language models (LLMs) to automate relevance assessment, with a focus on improving ranking models' performance by augmenting their training dataset. We fine-tuned small LLMs to enhance relevance assessments, thereby improving dataset creation quality for downstream ranking model training. Our experiments demonstrate that these fine-tuned small LLMs not only outperform certain closed-source models on our dataset but also lead to substantial improvements in ranking model performance. These results highlight the potential of leveraging small LLMs for efficient and scalable dataset augmentation, providing a practical solution for search engine optimization.
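The augmentation pipeline the abstract describes can be sketched as follows. This is a hypothetical illustration, not the authors' code: `score_relevance` is a stand-in for the fine-tuned small LLM judge (in practice it would prompt the model with the query and document), here approximated by simple token overlap purely so the sketch runs.

```python
# Sketch of dataset augmentation with LLM-judged relevance labels.
# Assumption: score_relevance stands in for a fine-tuned small LLM judge;
# a real implementation would call the model instead of measuring overlap.

def score_relevance(query: str, document: str) -> int:
    """Hypothetical LLM judge: returns a graded relevance label in 0..3.
    Approximated by query-token overlap purely for illustration."""
    q_tokens = set(query.lower().split())
    d_tokens = set(document.lower().split())
    overlap = len(q_tokens & d_tokens) / max(len(q_tokens), 1)
    return min(3, round(overlap * 3))

def augment_dataset(pairs):
    """Attach a relevance label to each unlabeled (query, document) pair,
    yielding training triples for a downstream ranking model."""
    return [(q, d, score_relevance(q, d)) for q, d in pairs]

unlabeled = [
    ("small llm fine-tuning", "Fine-tuning small LLM models for ranking"),
    ("small llm fine-tuning", "A history of steam engines"),
]
labeled = augment_dataset(unlabeled)
```

The labeled triples would then be merged into the ranking model's training set, which is the augmentation step the paper evaluates.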

Quentin Fitte-Rey, Matyas Amrouche, Romain Deveaud

Subject: Computing Technology; Computer Technology

Quentin Fitte-Rey, Matyas Amrouche, Romain Deveaud. Augmented Relevance Datasets with Fine-Tuned Small LLMs [EB/OL]. (2025-04-13) [2025-05-01]. https://arxiv.org/abs/2504.09816.
