
Steering LLM Reasoning Through Bias-Only Adaptation

Source: arXiv
Abstract

Recent work on reasoning-oriented language models, exemplified by o1-like systems, suggests that reinforcement-learning (RL) finetuning does not create new capabilities but instead strengthens reasoning patterns already latent in the pretrained network. We test this claim by training steering vectors: layer-wise biases that additively amplify selected hidden features while leaving all original weights unchanged. Experiments on four base models across the GSM8K and MATH benchmarks show that steering vectors recover, and in several cases exceed, the accuracy of fully-tuned counterparts. This result supports the view that the required reasoning skills pre-exist in the base model. Further, logit-lens analysis reveals that the trained vectors consistently boost token groups linked to structured languages and logical connectors, providing an interpretable account that aligns with the demands of quantitative reasoning tasks.
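The bias-only setup the abstract describes can be illustrated with a short sketch: freeze every pretrained weight and train one additive bias vector (a steering vector) per transformer layer, applied to that layer's hidden states. The sketch below is a minimal illustration, assuming a Hugging Face GPT-2-style model; the base model, layer path, hook placement, and optimizer settings are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of bias-only "steering vector" adaptation. Assumes a
# Hugging Face GPT-2-style causal LM; the model name, layer path, and
# hyperparameters are illustrative, not the paper's exact setup.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

# Freeze all original weights; only the steering vectors will be trained.
for p in model.parameters():
    p.requires_grad = False

hidden = model.config.hidden_size
layers = model.transformer.h  # GPT-2 block list; path differs per architecture

# One trainable bias per layer, initialized to zero so training starts
# from the unmodified base model.
steering = torch.nn.ParameterList(
    [torch.nn.Parameter(torch.zeros(hidden)) for _ in layers]
)

def make_hook(vec):
    def hook(module, inputs, output):
        # Transformer blocks typically return a tuple whose first element
        # is the hidden states; add the bias at every sequence position.
        hidden_states = output[0] if isinstance(output, tuple) else output
        steered = hidden_states + vec
        if isinstance(output, tuple):
            return (steered,) + output[1:]
        return steered
    return hook

for layer, vec in zip(layers, steering):
    layer.register_forward_hook(make_hook(vec))

# Gradients flow only into the steering vectors.
optimizer = torch.optim.AdamW(steering.parameters(), lr=1e-3)
```

Because the vectors start at zero, the first forward pass reproduces the base model exactly, and removing the hooks recovers it; all adaptation is carried by the per-layer biases alone.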

Viacheslav Sinii, Alexey Gorbatovski, Artem Cherepanov, Boris Shaposhnikov, Nikita Balagansky, Daniil Gavrilov

Subject: Computing Technology, Computer Technology

Viacheslav Sinii, Alexey Gorbatovski, Artem Cherepanov, Boris Shaposhnikov, Nikita Balagansky, Daniil Gavrilov. Steering LLM Reasoning Through Bias-Only Adaptation [EB/OL]. (2025-05-24) [2025-06-07]. https://arxiv.org/abs/2505.18706.
