Leveraging Reference Documents for Zero-Shot Ranking via Large Language Models
Large Language Models (LLMs) have demonstrated exceptional performance on the task of text ranking for information retrieval. While Pointwise ranking approaches offer computational efficiency by scoring documents independently, they often yield biased relevance estimates due to the lack of inter-document comparisons. In contrast, Pairwise methods improve ranking accuracy by explicitly comparing document pairs, but suffer from substantial computational overhead with quadratic complexity ($O(n^2)$). To address this tradeoff, we propose \textbf{RefRank}, a simple and effective comparative ranking method based on a fixed reference document. Instead of comparing all document pairs, RefRank prompts the LLM to evaluate each candidate relative to a shared reference anchor. By selecting a reference anchor that encapsulates the core query intent, RefRank implicitly captures relevance cues, enabling indirect comparison between documents via this common anchor. This reduces the computational cost to linear time ($O(n)$) while preserving the advantages of comparative evaluation. To further enhance robustness, we aggregate multiple RefRank outputs using a weighted averaging scheme across different reference choices. Experiments on several benchmark datasets and with various LLMs show that RefRank significantly outperforms Pointwise baselines and achieves performance at least on par with Pairwise approaches at a significantly lower computational cost.
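To make the comparative scheme concrete, below is a minimal Python sketch of the idea the abstract describes: each candidate is compared against a fixed set of reference anchors, and the per-reference preferences are combined by weighted averaging. The function names (`refrank_scores`, `llm_compare`) and the $[0, 1]$ preference interface are illustrative assumptions, not the paper's actual prompts or scoring interface.

```python
from typing import Callable, List

def refrank_scores(
    query: str,
    candidates: List[str],
    references: List[str],
    weights: List[float],
    llm_compare: Callable[[str, str, str], float],
) -> List[float]:
    """Score candidates against fixed reference anchors (a sketch).

    llm_compare(query, reference, candidate) is a hypothetical hook that
    is assumed to return a preference in [0, 1]: the LLM's estimate that
    the candidate is more relevant to the query than the reference.
    One call per (candidate, reference) pair gives O(n) cost per
    reference, versus O(n^2) for all-pairs comparison.
    """
    total_weight = sum(weights)  # weights assumed positive
    scores = []
    for cand in candidates:
        # Compare the candidate to each reference anchor, then combine
        # the per-reference preferences with a weighted average.
        combined = sum(
            w * llm_compare(query, ref, cand)
            for ref, w in zip(references, weights)
        )
        scores.append(combined / total_weight)
    return scores

# Usage: rank candidates by descending aggregated score, e.g.
#   scores = refrank_scores(query, docs, refs, weights, llm_compare)
#   ranking = sorted(range(len(docs)), key=lambda i: -scores[i])
```

Because every candidate is scored against the same anchors, the scores are mutually comparable, which is what lets a single linear pass recover a comparative ranking.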
Jieran Li, Xiuyuan Hu, Yang Zhao, Shengyao Zhuang, Hao Zhang
Computing Technology; Computer Technology
Jieran Li, Xiuyuan Hu, Yang Zhao, Shengyao Zhuang, Hao Zhang. Leveraging Reference Documents for Zero-Shot Ranking via Large Language Models [EB/OL]. (2025-06-13) [2025-06-22]. https://arxiv.org/abs/2506.11452.