LMAR: Language Model Augmented Retriever for Domain-specific Knowledge Indexing

Source: arXiv
Abstract

Retrieval Augmented Generation (RAG) systems often struggle with domain-specific knowledge due to performance deterioration of pre-trained embeddings and the prohibitive computational costs of large language model (LLM)-based retrievers. While fine-tuning embedding models with augmented data offers a promising direction, its effectiveness is limited by the need for high-quality training data and reliable chunking strategies that preserve contextual integrity. We propose LMAR (Language Model Augmented Retriever), a model-agnostic framework that addresses these challenges by combining LLM-guided data synthesis with contrastive embedding adaptation and efficient text clustering. LMAR consists of a two-stage pipeline: (1) triplet sampling and synthetic data augmentation, where LLMs act as both labeler and validator to ensure high-fidelity supervision throughout the pipeline; and (2) contrastive adaptation of the embedding model combined with efficient text clustering of the corpus. Experimental results across multiple domain-specific benchmark datasets demonstrate that LMAR outperforms multiple baseline models while maintaining moderate hardware requirements and low latency. Its model-agnostic nature further enables seamless integration with emerging RAG architectures and text embedding models, ensuring continual improvements without redesigning the pipeline. These results highlight LMAR as a practical and cost-effective solution for scalable domain-specific adaptation.
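The abstract does not give implementation details, but the contrastive embedding adaptation it describes is commonly realized with a triplet objective over (anchor, positive, negative) text pairs. The following minimal sketch uses the sentence-transformers library to illustrate that step; the example triplets, base model, and hyperparameters are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of triplet-based contrastive embedding adaptation,
# the kind of step described at a high level in the LMAR abstract.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Assumed triplets (anchor query, relevant chunk, irrelevant chunk),
# e.g. synthesized and validated by an LLM acting as labeler/validator.
triplets = [
    ("query about a domain concept",
     "chunk that correctly explains the concept",
     "chunk about an unrelated topic"),
    ("another domain-specific question",
     "chunk containing the answer",
     "hard negative chunk from a different section"),
]

train_examples = [InputExample(texts=[a, p, n]) for a, p, n in triplets]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=16)

# Any pre-trained text embedding model could be adapted (model-agnostic);
# the model name here is a placeholder choice.
model = SentenceTransformer("all-MiniLM-L6-v2")
loss = losses.TripletLoss(model=model)

# Short fine-tuning run; epochs and warmup_steps are placeholder values.
model.fit(train_objectives=[(train_loader, loss)], epochs=1, warmup_steps=10)
model.save("domain-adapted-embedder")  # hypothetical output path
```

The adapted embedder can then be plugged into an existing RAG index in place of the original embedding model, which is consistent with the model-agnostic claim in the abstract.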

Yao Zhao, Yantian Ding, Zhiyue Zhang, Dapeng Yao, Yanxun Xu

Subject: Computing Technology, Computer Technology

Yao Zhao, Yantian Ding, Zhiyue Zhang, Dapeng Yao, Yanxun Xu. LMAR: Language Model Augmented Retriever for Domain-specific Knowledge Indexing [EB/OL]. (2025-08-04) [2025-08-24]. https://arxiv.org/abs/2508.05672.