
FreeLM: Fine-Tuning-Free Language Model

Source: arXiv
Abstract

Pre-trained language models (PLMs) have achieved remarkable success in NLP tasks. Despite this success, mainstream solutions largely follow the pre-training then fine-tuning paradigm, which brings both high deployment costs and low training efficiency. Nevertheless, fine-tuning on a specific task is essential, because PLMs are pre-trained only with the language signal from large raw corpora. In this paper, we propose a novel fine-tuning-free strategy for language models that considers both a language signal and a teacher signal. The teacher signal is an abstraction of a battery of downstream tasks, provided in a unified proposition format. Trained with both the language signal and the strong task-aware teacher signal in an interactive manner, our FreeLM model demonstrates strong generalization and robustness. In experiments, FreeLM outperforms large models such as GPT-3 and InstructGPT on a range of language understanding tasks, while being much smaller, with 0.3B parameters compared to 175B in these models.
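The abstract describes training that interleaves a language-modeling objective with a task-aware teacher objective over unified propositions. A minimal sketch of that idea is shown below; it is an assumption-based illustration, not the authors' implementation. The toy backbone, the proposition head, the alternation schedule, and the random data are all hypothetical placeholders.

import torch
import torch.nn as nn

# Hypothetical sketch of FreeLM-style interactive training.
# Backbone, heads, and data are placeholders, not the paper's code.
class ToyBackbone(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)   # language signal: next-token logits
        self.prop_head = nn.Linear(d_model, 2)          # teacher signal: proposition true/false

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))
        return self.lm_head(h), self.prop_head(h[:, 0])  # per-token logits, sequence-level logits

model = ToyBackbone()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

def lm_step(tokens):
    # Language signal: next-token prediction on raw text.
    logits, _ = model(tokens[:, :-1])
    return ce(logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))

def teacher_step(prop_tokens, labels):
    # Teacher signal: classify a unified proposition (a downstream-task
    # instance rewritten as a statement) as correct or incorrect.
    _, prop_logits = model(prop_tokens)
    return ce(prop_logits, labels)

for step in range(4):                                    # toy loop with random data
    tokens = torch.randint(0, 1000, (8, 33))
    props = torch.randint(0, 1000, (8, 32))
    labels = torch.randint(0, 2, (8,))
    loss = lm_step(tokens) if step % 2 == 0 else teacher_step(props, labels)
    opt.zero_grad(); loss.backward(); opt.step()

The strict alternation above is only one way to realize "interactive" training of the two signals; the point of the sketch is that both objectives update the same shared backbone, so no per-task fine-tuning stage is needed afterwards.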

Aixin Sun, Xuying Meng, Xin Jiang, Xiang Li, Yequan Wang

Linguistics

Aixin Sun, Xuying Meng, Xin Jiang, Xiang Li, Yequan Wang. FreeLM: Fine-Tuning-Free Language Model [EB/OL]. (2023-05-02) [2025-05-28]. https://arxiv.org/abs/2305.01616.