
Simple Mechanistic Explanations for Out-Of-Context Reasoning

Source: arXiv
Abstract

Out-of-context reasoning (OOCR) is a phenomenon in which fine-tuned LLMs exhibit surprisingly deep out-of-distribution generalization. Rather than learning shallow heuristics, they implicitly internalize and act on the consequences of observations scattered throughout the fine-tuning data. In this work, we investigate this phenomenon mechanistically and find that many instances of OOCR in the literature have a simple explanation: the LoRA fine-tuning essentially adds a constant steering vector, steering the model towards a general concept. This improves performance on the fine-tuning task and in many other concept-related domains, causing the surprising generalization. Moreover, we can directly train steering vectors for these tasks from scratch, which also induces OOCR. We find that our results hold even for a task that seems like it must involve conditional behavior (model backdoors); it turns out that unconditionally adding a steering vector is sufficient. Overall, our work presents one explanation of what gets learned during fine-tuning for OOCR tasks, contributing to the key question of why LLMs can reason out of context, an advanced capability that is highly relevant to their safe and reliable deployment.
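The abstract's central claim is that, in these OOCR settings, LoRA fine-tuning acts roughly like adding a constant steering vector that pushes the model towards a general concept. As a purely illustrative sketch (not code from the paper), the snippet below shows how such a constant vector could be injected into one transformer block's residual stream of a Hugging Face model via a forward hook; the model choice (gpt2), layer index, and vector values are assumptions for demonstration, whereas in the paper's setting the vector would be learned or extracted from the LoRA update.

```python
# Illustrative sketch only: unconditionally add a constant steering vector to
# one transformer block's output, mimicking the mechanism the abstract
# attributes to LoRA fine-tuning. Model, layer, and vector are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumed small model for demonstration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layer_idx = 6                                   # assumed injection point
d_model = model.config.hidden_size              # 768 for gpt2
steering_vector = torch.randn(d_model) * 0.05   # placeholder; would be learned

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # add the same vector at every token position, then pass the rest through.
    hidden = output[0] + steering_vector.to(device=output[0].device,
                                            dtype=output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(add_steering)

prompt = "The city we have been discussing is"
ids = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=10, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook to restore the unmodified model
```

In this sketch the vector is added unconditionally on every forward pass, which mirrors the abstract's finding that even apparently conditional behaviors (such as model backdoors) can be reproduced by an unconditional steering vector.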

Atticus Wang, Joshua Engels, Oliver Clive-Griffin

Computing Technology, Computer Technology

Atticus Wang, Joshua Engels, Oliver Clive-Griffin. Simple Mechanistic Explanations for Out-Of-Context Reasoning [EB/OL]. (2025-07-10) [2025-07-23]. https://arxiv.org/abs/2507.08218.
