
ProRank: Prompt Warmup via Reinforcement Learning for Small Language Models Reranking

Source: arXiv
Abstract

Reranking is fundamental to information retrieval and retrieval-augmented generation, and recent Large Language Models (LLMs) have significantly improved document reranking quality. However, current approaches primarily rely on large-scale LLMs (>7B parameters) through zero-shot prompting, incurring high computational costs. Small Language Models (SLMs) offer a promising, efficient alternative, but our preliminary quantitative analysis reveals that without fine-tuning they struggle to understand task prompts, which limits their effectiveness for document reranking. To address this issue, we introduce ProRank, a novel two-stage training approach for SLM-based document reranking. First, a prompt warmup stage uses reinforcement learning (GRPO) to steer SLMs toward understanding task prompts and generating more accurate coarse-grained binary relevance scores. Second, a fine-grained score learning stage continues fine-tuning the SLMs, without introducing additional layers, to further improve reranking quality. Comprehensive experiments demonstrate that ProRank consistently outperforms the most advanced open-source and proprietary reranking models. Notably, our lightweight ProRank-0.5B model surpasses even a powerful 32B LLM reranker on the BEIR benchmark, establishing that properly trained SLMs can deliver superior document reranking performance while maintaining computational efficiency.
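The abstract's prompt-warmup stage rewards the SLM for emitting correct coarse-grained binary relevance labels, with GRPO normalizing rewards within each sampled group. The sketch below illustrates that idea only; the reward values, accepted label strings, and helper names are assumptions for illustration, not the paper's actual design.

```python
import statistics


def binary_relevance_reward(prediction: str, gold_label: int) -> float:
    """Illustrative reward: 1.0 if the generated binary relevance label
    matches the gold label, 0.0 if it mismatches, and a small negative
    penalty when the output cannot be parsed as a binary label."""
    text = prediction.strip().lower()
    if text in ("1", "relevant", "yes", "true"):
        pred = 1
    elif text in ("0", "irrelevant", "no", "false"):
        pred = 0
    else:
        return -0.1  # off-format generation: discourage unparseable output
    return 1.0 if pred == gold_label else 0.0


def grpo_advantages(rewards: list[float]) -> list[float]:
    """GRPO-style group normalization: subtract the group mean reward and
    divide by the group standard deviation, so no value critic is needed."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]
```

For example, a group of four sampled completions scoring `[1.0, 0.0, 1.0, 0.0]` yields zero-mean advantages, so correct labels are pushed up and incorrect ones down relative to the group.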

Aamir Shakir, Rui Huang, Julius Lipp, Jing Li, Xianming Li

Subject: Computing Technology; Computer Technology

Aamir Shakir, Rui Huang, Julius Lipp, Jing Li, Xianming Li. ProRank: Prompt Warmup via Reinforcement Learning for Small Language Models Reranking [EB/OL]. (2025-06-03) [2025-07-23]. https://arxiv.org/abs/2506.03487.
