Mitigating Content Effects on Reasoning in Language Models through Fine-Grained Activation Steering

Source: arXiv
Abstract

Large language models (LLMs) frequently demonstrate reasoning limitations, often conflating content plausibility (i.e., material inference) with logical validity (i.e., formal inference). This can result in biased inferences, where plausible arguments are incorrectly deemed logically valid or vice versa. Mitigating this limitation is critical, as it undermines the trustworthiness and generalizability of LLMs in applications that demand rigorous logical consistency. This paper investigates the problem of mitigating content biases in formal reasoning through activation steering. Specifically, we curate a controlled syllogistic reasoning dataset to disentangle formal validity from content plausibility. After localising the layers responsible for formal and material inference, we investigate contrastive activation steering methods for test-time interventions. An extensive empirical analysis on different LLMs reveals that contrastive steering consistently supports linear control over content biases. However, we observe that a static approach is insufficient for improving all the tested models. We therefore control content effects dynamically, determining the values of the steering parameters at inference time via fine-grained conditional methods. We find that conditional steering is effective on otherwise unresponsive models, achieving up to 15% absolute improvement in formal reasoning accuracy with a newly introduced kNN-based method (K-CAST). Finally, additional experiments reveal that steering for content effects is robust to prompt variations, incurs minimal side effects on language modeling capabilities, and can partially generalize to out-of-distribution reasoning tasks. Practically, this paper demonstrates that activation-level interventions can offer a scalable strategy for enhancing the robustness of LLMs, contributing towards more systematic and unbiased formal reasoning.
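The abstract gives no implementation details, but the contrastive steering it refers to follows a familiar recipe: extract hidden activations for two contrasting sets of prompts, take the mean difference as a steering vector, and add a scaled copy of that vector to the residual stream at a chosen layer during inference. The sketch below illustrates that recipe on a Hugging Face causal LM; the model name, layer index, steering strength, and example syllogisms are placeholders rather than the paper's actual settings, and the hook-based injection is one common way to realise the intervention, not necessarily the authors' implementation.

```python
# Minimal sketch of contrastive activation steering (difference-of-means variant),
# assuming a GPT-2-style Hugging Face causal LM. Layer index, steering strength,
# and prompts are illustrative placeholders, not the paper's settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper evaluates several LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layer_idx = 6   # layer hypothesised to carry the formal/material signal (assumption)
alpha = 4.0     # steering strength; static here, chosen per input in conditional variants

def last_token_activation(prompt: str) -> torch.Tensor:
    """Hidden state of the final prompt token at `layer_idx`."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so block `layer_idx`
    # corresponds to hidden_states[layer_idx + 1]
    return out.hidden_states[layer_idx + 1][0, -1, :]

# Contrastive sets: formally valid vs. invalid syllogisms with mismatched
# plausibility (illustrative items, not the paper's curated dataset).
valid = ["All mammals are animals. Whales are mammals. Therefore whales are animals."]
invalid = ["All birds can fly. Robins can fly. Therefore robins are birds."]

# Steering vector = mean activation difference between the two sets.
steer = (torch.stack([last_token_activation(p) for p in valid]).mean(0)
         - torch.stack([last_token_activation(p) for p in invalid]).mean(0))

def add_steering(module, inputs, output):
    """Forward hook: shift the residual stream along the steering direction."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + alpha * steer.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

# `model.transformer.h` is GPT-2-specific; other architectures expose layers differently.
handle = model.transformer.h[layer_idx].register_forward_hook(add_steering)
prompt = ("All toys are dangerous. Knives are toys. "
          "Is the conclusion that knives are dangerous logically valid?")
ids = tok(prompt, return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20)[0], skip_special_tokens=True))
handle.remove()
```

A conditional variant along the lines the abstract describes would set `alpha` (or whether to steer at all) per input, for example with a lightweight kNN classifier over the same activations, rather than using one fixed value for every prompt.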

Marco Valentino, Geonhee Kim, Dhairya Dalal, Zhixue Zhao, André Freitas

Computing Technology, Computer Technology

Marco Valentino, Geonhee Kim, Dhairya Dalal, Zhixue Zhao, André Freitas. Mitigating Content Effects on Reasoning in Language Models through Fine-Grained Activation Steering [EB/OL]. (2025-05-17) [2025-06-09]. https://arxiv.org/abs/2505.12189