
APT: Improving Specialist LLM Performance with Weakness Case Acquisition and Iterative Preference Training

Source: arXiv

Abstract

Large Language Models (LLMs) often require domain-specific fine-tuning to address targeted tasks, which risks degrading their general capabilities. Maintaining a balance between domain-specific enhancements and general model utility is a key challenge. This paper proposes a novel approach named APT (Weakness Case Acquisition and Iterative Preference Training) to enhance domain-specific performance with self-generated dis-preferred weakness data (bad cases and similar cases). APT uniquely focuses on training the model using only those samples where errors occur, alongside a small set of similar samples retrieved for this purpose. This targeted training minimizes interference with the model's existing knowledge base, effectively retaining generic capabilities. Experimental results on the LLaMA-2 and Mistral-v0.3 models across various benchmarks demonstrate that APT ensures no reduction in generic capability and achieves superior performance on downstream tasks compared to various existing methods. This validates our method as an effective strategy for enhancing domain-specific capabilities without sacrificing the model's broader applicability.
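The following is a minimal sketch of the training loop as described in the abstract, not the authors' released implementation. The helper callables (an evaluator, a similar-case retriever, and a preference-training step such as DPO) are hypothetical placeholders supplied by the caller; only the overall flow of weakness case acquisition followed by iterative preference training is taken from the text.

```python
from typing import Callable, Dict, List, Tuple

Example = Dict[str, str]  # expects "prompt" and "reference" keys


def apt_sketch(
    generate: Callable[[Example], str],          # model's own generation
    is_correct: Callable[[Example, str], bool],  # task-specific checker (hypothetical)
    retrieve_similar: Callable[[Example, int], List[Example]],  # similar-case retrieval (hypothetical)
    preference_train: Callable[[List[Tuple[str, str, str]]], None],  # e.g. a DPO-style update (hypothetical)
    domain_data: List[Example],
    num_rounds: int = 3,
    k_similar: int = 2,
) -> None:
    """Iterate: collect weakness cases, retrieve similar cases, train on preference pairs."""
    for _ in range(num_rounds):
        # 1. Weakness case acquisition: keep only the samples the model currently gets wrong.
        bad_cases = [ex for ex in domain_data if not is_correct(ex, generate(ex))]
        if not bad_cases:
            break  # no remaining weaknesses on this data

        # 2. Add a small set of similar samples retrieved for each bad case.
        similar_cases = [s for ex in bad_cases for s in retrieve_similar(ex, k_similar)]

        # 3. Build preference pairs: reference answer preferred,
        #    the model's own (self-generated) output dis-preferred.
        pairs = [
            (ex["prompt"], ex["reference"], generate(ex))
            for ex in bad_cases + similar_cases
        ]

        # 4. Preference training restricted to these pairs, leaving the rest
        #    of the model's knowledge largely untouched.
        preference_train(pairs)
```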

Jun Rao, Zepeng Lin, Xuebo Liu, Xiaopeng Ke, Lian Lian, Dong Jin, Shengjun Cheng, Jun Yu, Min Zhang

Subject: Computing Technology; Computer Technology

Jun Rao, Zepeng Lin, Xuebo Liu, Xiaopeng Ke, Lian Lian, Dong Jin, Shengjun Cheng, Jun Yu, Min Zhang. APT: Improving Specialist LLM Performance with Weakness Case Acquisition and Iterative Preference Training [EB/OL]. (2025-06-03) [2025-07-01]. https://arxiv.org/abs/2506.03483.
