
Unsupervised Calibration through Prior Adaptation for Text Classification using Large Language Models

Source: arXiv
English Abstract

A wide variety of natural language tasks are currently being addressed with large-scale language models (LLMs). These models are usually trained on a very large amount of unsupervised text data and adapted to a downstream natural language task using methods like fine-tuning, calibration, or in-context learning. In this work, we propose an approach that adapts the prior class distribution to perform text classification tasks without the need for labelled samples, using only a few in-domain sample queries. The proposed approach treats the LLM as a black box, adding a stage where the model posteriors are calibrated to the task. Results show that these methods outperform the un-adapted model for different numbers of training shots in the prompt, as well as a previous approach where calibration is performed without using any adaptation data.
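The abstract only sketches the calibration stage. Below is a minimal illustration of what adapting the prior class distribution over a black-box model's posteriors can look like; it is an EM-style prior re-estimation sketch in the spirit of the unsupervised setting described above, not the authors' exact algorithm. The uniform training prior, the function name, and the default iteration count are assumptions made for illustration.

```python
import numpy as np


def adapt_priors(log_posteriors, n_iters=10):
    """Unsupervised prior-adaptation sketch (hypothetical helper).

    log_posteriors: (N, C) array of log P(c | x) produced by the LLM for
    N unlabelled in-domain queries and C classes. The prior implied by
    the model is assumed uniform here (an illustrative assumption).
    Returns posteriors re-calibrated so that their implied class prior
    matches one estimated from the unlabelled queries themselves.
    """
    # Convert log-posteriors to normalized probabilities.
    posteriors = np.exp(log_posteriors - log_posteriors.max(axis=1, keepdims=True))
    posteriors /= posteriors.sum(axis=1, keepdims=True)

    n_classes = posteriors.shape[1]
    prior = np.full(n_classes, 1.0 / n_classes)
    adapted = posteriors

    for _ in range(n_iters):
        # E-step: re-weight the black-box posteriors by the current prior estimate.
        adapted = posteriors * prior
        adapted /= adapted.sum(axis=1, keepdims=True)
        # M-step: re-estimate the class prior from the adapted posteriors.
        prior = adapted.mean(axis=0)

    return adapted, prior
```

In this sketch the loop alternates between re-weighting the model posteriors with the current prior estimate and re-estimating that prior from the unlabelled in-domain queries, which is one standard way to perform unsupervised prior adaptation while treating the classifier as a black box.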

Pablo Piantanida, Luciana Ferrer, Matías Vera, Lautaro Estienne

10.26615/issn.2603-2821.2023_002

Computing Technology; Computer Technology

Pablo Piantanida, Luciana Ferrer, Matías Vera, Lautaro Estienne. Unsupervised Calibration through Prior Adaptation for Text Classification using Large Language Models [EB/OL]. (2023-07-13) [2025-08-23]. https://arxiv.org/abs/2307.06713.
