
SLOT: Sample-specific Language Model Optimization at Test-time

Source: arXiv
Abstract

We propose SLOT (Sample-specific Language Model Optimization at Test-time), a novel, parameter-efficient test-time inference approach that enhances a language model's ability to respond more accurately to individual prompts. Existing Large Language Models (LLMs) often struggle with complex instructions, leading to poor performance on instructions that are not well represented among general training samples. To address this, SLOT performs a few optimization steps at test time to update a lightweight, sample-specific parameter vector. This vector is added to the final hidden features before the output head, and adaptation is efficient because the last-layer features are cached during per-sample optimization. By minimizing the cross-entropy loss on the input prompt only, SLOT helps the model better align with and follow each given instruction. In experiments, we demonstrate that our method outperforms the compared models across multiple benchmarks and LLMs. For example, Qwen2.5-7B with SLOT achieves an accuracy gain of 8.65 percentage points on GSM8K, from 57.54% to 66.19%, while DeepSeek-R1-Distill-Llama-70B with SLOT achieves a SOTA accuracy of 68.69% on GPQA among 70B-level models. Our code is available at https://github.com/maple-research-lab/SLOT.
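
To make the mechanism concrete, below is a minimal sketch of the idea as described in the abstract, written against a Hugging Face-style causal LM. The function name `slot_adapt`, the step count, and the learning rate are illustrative assumptions, not the authors' exact implementation (see the linked repository for that): a single sample-specific vector is optimized for a few steps to reduce next-token cross-entropy on the prompt, with the last-layer features computed once and cached.

```python
import torch
import torch.nn.functional as F

def slot_adapt(model, tokenizer, prompt, steps=3, lr=0.01):
    """Hypothetical sketch of SLOT-style test-time adaptation.

    Optimizes a lightweight sample-specific vector `delta` that is added
    to the cached last-layer hidden features before the output head, by
    minimizing next-token cross-entropy on the input prompt only.
    """
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    input_ids = inputs["input_ids"]

    with torch.no_grad():
        # Cache the last-layer hidden features once; they stay fixed
        # during per-sample optimization, which keeps adaptation cheap.
        hidden = model(**inputs, output_hidden_states=True).hidden_states[-1]

    # One vector of hidden size, broadcast over all prompt positions.
    delta = torch.zeros(
        1, 1, hidden.size(-1),
        device=hidden.device, dtype=hidden.dtype, requires_grad=True,
    )
    optimizer = torch.optim.AdamW([delta], lr=lr)
    lm_head = model.get_output_embeddings()  # output head over the vocabulary

    for _ in range(steps):
        logits = lm_head(hidden + delta)  # shift features by the sample vector
        # Next-token cross-entropy on the prompt tokens only; the optimizer
        # tracks delta alone, so the model weights are never updated.
        loss = F.cross_entropy(
            logits[:, :-1].reshape(-1, logits.size(-1)),
            input_ids[:, 1:].reshape(-1),
        )
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # During generation, delta would be added to the final hidden states
    # at every decoding step before applying the output head.
    return delta.detach()
```

In this sketch the adapted `delta` would still need to be hooked into the decoding loop at generation time; the repository above provides the authors' actual implementation.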

Yang Hu, Xingyu Zhang, Xueji Fang, Zhiyang Chen, Xiao Wang, Huatian Zhang, Guojun Qi

Computing Technology, Computer Technology

Yang Hu, Xingyu Zhang, Xueji Fang, Zhiyang Chen, Xiao Wang, Huatian Zhang, Guojun Qi. SLOT: Sample-specific Language Model Optimization at Test-time [EB/OL]. (2025-05-18) [2025-06-22]. https://arxiv.org/abs/2505.12392.
