Zero-Shot Listwise Document Reranking with a Large Language Model
Supervised ranking methods based on bi-encoder or cross-encoder architectures have shown success in multi-stage text ranking tasks, but they require large amounts of relevance judgments as training data. In this work, we propose the Listwise Reranker with a Large Language Model (LRL), which achieves strong reranking effectiveness without using any task-specific training data. Unlike existing pointwise ranking methods, which score documents independently and rank them by score, LRL directly generates a reordered list of document identifiers given the candidate documents. Experiments on three TREC web search datasets demonstrate that LRL not only outperforms zero-shot pointwise methods when reranking first-stage retrieval results, but can also serve as a final-stage reranker that refines the top results of a pointwise method, improving efficiency. Additionally, we apply our approach to subsets of MIRACL, a recent multilingual retrieval dataset, with results showing its potential to generalize across languages.
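To make the listwise formulation concrete, the sketch below shows one way to prompt an LLM to reorder candidate passages by emitting a ranked list of identifiers. It is a minimal illustration, not the paper's exact prompt or parsing code: the `complete` callable stands in for any text-completion LLM API, and the prompt template and identifier scheme are assumptions.

```python
import re
from typing import Callable, List

def listwise_rerank(query: str,
                    passages: List[str],
                    complete: Callable[[str], str]) -> List[int]:
    """Ask an LLM to reorder candidate passages by relevance to the query.

    `complete` is a hypothetical wrapper around any LLM completion API;
    the prompt wording below is illustrative, not the LRL paper's template.
    Returns indices into `passages` in ranked order.
    """
    # Label each candidate with an identifier the model can echo back.
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        f"Query: {query}\n\n"
        f"Candidate passages:\n{numbered}\n\n"
        "Rank the passages above from most to least relevant to the query. "
        "Answer with identifiers only, e.g. [2] > [1] > [3].\n"
        "Ranking:"
    )
    response = complete(prompt)

    # Parse identifiers such as "[3]" from the response, keeping the first
    # occurrence of each and ignoring anything out of range.
    seen, order = set(), []
    for match in re.findall(r"\[(\d+)\]", response):
        idx = int(match) - 1
        if 0 <= idx < len(passages) and idx not in seen:
            seen.add(idx)
            order.append(idx)

    # Append any candidates the model omitted, preserving their original
    # first-stage order as a fallback.
    order += [i for i in range(len(passages)) if i not in seen]
    return order
```

Because the full candidate list may not fit in the model's context window, a reranker along these lines is typically applied to a short window of top candidates (for example, the top results from a first-stage retriever or a pointwise reranker), which matches the final-stage usage described in the abstract.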
Xinyu Zhang, Jimmy Lin, Ronak Pradeep, Xueguang Ma
Subjects: Computing Technology; Computer Technology
Xinyu Zhang, Jimmy Lin, Ronak Pradeep, Xueguang Ma. Zero-Shot Listwise Document Reranking with a Large Language Model [EB/OL]. (2023-05-03) [2025-06-04]. https://arxiv.org/abs/2305.02156