National Preprint Platform

RE-Adapt: Reverse Engineered Adaptation of Large Language Models

Source: arXiv
Abstract

We introduce RE-Adapt, an approach to fine-tuning large language models on new domains without degrading any pre-existing instruction-tuning. We reverse engineer an adapter which isolates what an instruction-tuned model has learned beyond its corresponding pretrained base model. Importantly, this requires no additional data or training. We can then fine-tune the base model on a new domain and readapt it to instruction following with the reverse engineered adapter. RE-Adapt and our low-rank variant LoRE-Adapt both outperform other methods of fine-tuning, across multiple popular LLMs and datasets, even when the models are used in conjunction with retrieval-augmented generation.
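The core idea in the abstract is plain weight arithmetic: subtract the pretrained base weights from the instruction-tuned weights to obtain an adapter, fine-tune the base on a new domain, then add the adapter back. A minimal sketch of that procedure, using toy NumPy matrices in place of real model layers (the variable names, the scaling factor `alpha`, and the rank `r` for the low-rank LoRE-Adapt variant are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

# Toy stand-ins for one weight matrix of each model; in practice the same
# arithmetic would be applied to every weight matrix in the network.
rng = np.random.default_rng(0)
W_base = rng.normal(size=(4, 4))                            # pretrained base model
W_instruct = W_base + rng.normal(scale=0.1, size=(4, 4))    # instruction-tuned model

# 1. Reverse engineer the adapter: isolate what instruction tuning added.
#    No data or training is needed, only the two checkpoints.
adapter = W_instruct - W_base

# 2. Fine-tune the base model on a new domain
#    (simulated here by a small random update).
W_domain = W_base + rng.normal(scale=0.05, size=(4, 4))

# 3. Re-adapt: add the instruction adapter back, optionally scaled by alpha.
alpha = 1.0
W_readapt = W_domain + alpha * adapter

# LoRE-Adapt (low-rank variant): truncate the adapter via SVD before re-adding.
U, S, Vt = np.linalg.svd(adapter)
r = 2                                            # illustrative rank
adapter_lowrank = (U[:, :r] * S[:r]) @ Vt[:r]
W_lore_adapt = W_domain + alpha * adapter_lowrank
```

With `alpha = 1.0`, adding the full adapter back to the untouched base exactly recovers the instruction-tuned weights; the scaling factor lets one trade off domain knowledge against instruction following.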

William Fleshman, Benjamin Van Durme

Subject: Computing Technology, Computer Technology

William Fleshman, Benjamin Van Durme. RE-Adapt: Reverse Engineered Adaptation of Large Language Models [EB/OL]. (2024-05-23) [2025-08-05]. https://arxiv.org/abs/2405.15007.
