Read the Docs Before Rewriting: Equip Rewriter with Domain Knowledge via Continual Pre-training
A Retrieval-Augmented Generation (RAG)-based question-answering (QA) system enhances a large language model's knowledge by retrieving relevant documents based on user queries. Discrepancies between user queries and document phrasings often necessitate query rewriting. However, in specialized domains, the rewriter model may struggle due to limited domain-specific knowledge. To resolve this, we propose the R&R (Read the doc before Rewriting) rewriter, which involves continual pre-training on professional documents, akin to how students prepare for open-book exams by reviewing textbooks. Additionally, it can be combined with supervised fine-tuning for improved results. Experiments on multiple datasets demonstrate that R&R excels in professional QA across multiple domains, effectively bridging the query-document gap, while maintaining good performance in general scenarios, thus advancing the application of RAG-based QA systems in specialized fields.
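To make the role of the rewriter concrete, the sketch below shows where query rewriting sits in a RAG QA pipeline. It is only an illustrative toy: the retriever is a keyword-overlap stand-in and rewrite_query is a hypothetical rule-based placeholder, whereas the paper's R&R rewriter is an LLM continually pre-trained (and optionally fine-tuned) on the domain documents.

```python
# Illustrative sketch of a RAG pipeline with a query-rewriting step.
# All components are hypothetical stand-ins, not the paper's implementation.
from typing import List

DOCS = [
    "Initial margin must be posted before a futures position is opened.",
    "The margin requirement for index futures is set by the exchange.",
]

def rewrite_query(query: str) -> str:
    """Stand-in for a domain-aware rewriter.

    A trained R&R-style rewriter would rephrase the user query using the
    terminology it absorbed from domain documents during continual
    pre-training; here a single word substitution illustrates the idea.
    """
    return query.replace("deposit", "margin")

def retrieve(query: str, docs: List[str], k: int = 1) -> List[str]:
    """Toy keyword-overlap retriever; a real system would use BM25 or dense retrieval."""
    query_terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(query_terms & set(d.lower().split())))
    return scored[:k]

user_query = "How much deposit do I need for index futures?"
rewritten = rewrite_query(user_query)   # bridge the query-document phrasing gap
context = retrieve(rewritten, DOCS)     # retrieval now matches document wording
print(rewritten)
print(context)
```

The design point is that rewriting happens before retrieval, so the retriever sees a query phrased in the documents' own vocabulary rather than the user's informal wording.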
Qi Wang, Yixuan Cao, Yifan Liu, Jiangtao Zhao, Ping Luo
Computing Technology, Computer Technology
Qi Wang, Yixuan Cao, Yifan Liu, Jiangtao Zhao, Ping Luo. Read the Docs Before Rewriting: Equip Rewriter with Domain Knowledge via Continual Pre-training [EB/OL]. (2025-07-01) [2025-07-16]. https://arxiv.org/abs/2507.00477.