
Can Pre-training Indicators Reliably Predict Fine-tuning Outcomes of LLMs?

Source: arXiv

English Abstract

While metrics available during pre-training, such as perplexity, correlate well with model performance in scaling-law studies, their predictive capacity at a fixed model size remains unclear, hindering effective model selection and development. To address this gap, we formulate the task of selecting pre-training checkpoints to maximize downstream fine-tuning performance as a pairwise classification problem: predicting which of two LLMs, differing in their pre-training, will perform better after supervised fine-tuning (SFT). We construct a dataset of 50 1B-parameter LLM variants with systematically varied pre-training configurations, e.g., objectives or data, and evaluate them on diverse downstream tasks after SFT. We first show that conventional perplexity is a misleading indicator. We therefore introduce novel unsupervised and supervised proxy metrics derived from pre-training that reduce the relative performance prediction error rate by over 50%. Despite the inherent complexity of this task, we demonstrate the practical utility of our proposed proxies in specific scenarios, paving the way for more efficient design of pre-training schemes optimized for various downstream tasks.
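The pairwise framing described above can be made concrete with a small sketch. The following Python snippet is not the authors' code; it only illustrates, under assumed inputs, how a pre-training proxy metric could be scored on the pairwise task: for every pair of checkpoints, the proxy predicts which one will perform better after SFT, and the fraction of wrong predictions is the error rate. All names (`proxy_scores`, `sft_scores`, checkpoint ids) and the toy numbers are hypothetical.

```python
# Minimal sketch of the pairwise checkpoint-selection evaluation described in
# the abstract. Assumes higher proxy values are better (flip the sign for
# perplexity-like metrics). Illustrative only; not the paper's implementation.
from itertools import combinations

def pairwise_error_rate(proxy_scores, sft_scores):
    """Fraction of checkpoint pairs where the proxy picks the wrong winner.

    proxy_scores: dict mapping checkpoint id -> proxy metric before fine-tuning.
    sft_scores:   dict mapping checkpoint id -> downstream score after SFT.
    """
    errors, total = 0, 0
    for a, b in combinations(proxy_scores, 2):
        if sft_scores[a] == sft_scores[b]:
            continue  # ties have no ground-truth winner; skip them
        predicted_winner = a if proxy_scores[a] > proxy_scores[b] else b
        actual_winner = a if sft_scores[a] > sft_scores[b] else b
        errors += predicted_winner != actual_winner
        total += 1
    return errors / total if total else 0.0

# Toy usage with made-up numbers for three checkpoints:
proxy = {"ckpt_A": 0.62, "ckpt_B": 0.55, "ckpt_C": 0.71}
sft = {"ckpt_A": 41.3, "ckpt_B": 44.0, "ckpt_C": 45.2}
print(pairwise_error_rate(proxy, sft))  # 0.333... on this toy data
```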

Hansi Zeng, Kai Hui, Honglei Zhuang, Zhen Qin, Zhenrui Yue, Hamed Zamani, Dana Alon

Subject: Computing Technology, Computer Technology

Hansi Zeng, Kai Hui, Honglei Zhuang, Zhen Qin, Zhenrui Yue, Hamed Zamani, Dana Alon. Can Pre-training Indicators Reliably Predict Fine-tuning Outcomes of LLMs? [EB/OL]. (2025-04-16) [2025-06-12]. https://arxiv.org/abs/2504.12491
