
Knowledge-Instruct: Effective Continual Pre-training from Limited Data using Instructions

Source: arXiv
Abstract

While Large Language Models (LLMs) acquire vast knowledge during pre-training, they often lack domain-specific, new, or niche information. Continual pre-training (CPT) attempts to address this gap but suffers from catastrophic forgetting and inefficiencies in low-data regimes. We introduce Knowledge-Instruct, a novel approach to efficiently inject knowledge from limited corpora through pure instruction-tuning. By generating information-dense synthetic instruction data, it effectively integrates new knowledge while preserving general reasoning and instruction-following abilities. Knowledge-Instruct demonstrates superior factual memorization, minimizes catastrophic forgetting, and remains scalable by leveraging synthetic data from relatively small language models. Additionally, it enhances contextual understanding, including complex multi-hop reasoning, facilitating integration with retrieval systems. We validate its effectiveness across diverse benchmarks, including Companies, a new dataset that we release to measure knowledge injection capabilities.
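The abstract gives no implementation details, but the pipeline it describes (split a limited corpus into passages, prompt a relatively small language model to produce information-dense instruction/response pairs, then instruction-tune the target model on that synthetic data) can be sketched roughly as below. This is a minimal illustration under assumptions, not the authors' released code: small_lm_generate, the prompt wording, and the example corpus are placeholders.

# Minimal sketch of a Knowledge-Instruct-style data pipeline (assumptions, not the authors' code):
# 1) split a limited corpus into passages, 2) ask a small LM for information-dense
# instruction/response pairs, 3) collect them for standard instruction-tuning.
import json
from typing import Dict, List

def small_lm_generate(prompt: str) -> str:
    """Placeholder for a call to a relatively small instruction-following LM.
    Swap in any local or hosted model; here it returns a canned example so the
    sketch runs end to end."""
    return json.dumps([
        {"instruction": "What product does Acme Corp sell?",
         "response": "Acme Corp sells industrial-grade rocket skates."}
    ])

def make_prompt(passage: str) -> str:
    # Ask for dense, fact-grounded Q&A pairs rather than paraphrases of the passage.
    return (
        "Read the passage and write instruction/response pairs that together cover "
        "every distinct fact it contains. Answer only from the passage.\n\n"
        f"Passage:\n{passage}\n\n"
        "Return a JSON list of {\"instruction\": ..., \"response\": ...} objects."
    )

def build_instruction_dataset(passages: List[str]) -> List[Dict[str, str]]:
    dataset = []
    for passage in passages:
        pairs = json.loads(small_lm_generate(make_prompt(passage)))
        dataset.extend(pairs)
    return dataset

if __name__ == "__main__":
    corpus = ["Acme Corp, founded in 2021, sells industrial-grade rocket skates."]
    for example in build_instruction_dataset(corpus):
        print(example)  # feed these pairs into any standard instruction-tuning recipe

The resulting pairs are then used for pure instruction-tuning of the target model, which, per the abstract, injects the new facts while preserving general reasoning and instruction-following abilities.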

Oded Ovadia, Meni Brief, Rachel Lemberg, Eitam Sheetrit

Computing technology, computer technology

Oded Ovadia, Meni Brief, Rachel Lemberg, Eitam Sheetrit. Knowledge-Instruct: Effective Continual Pre-training from Limited Data using Instructions [EB/OL]. (2025-04-07) [2025-04-27]. https://arxiv.org/abs/2504.05571.
