From Directions to Cones: Exploring Multidimensional Representations of Propositional Facts in LLMs
Large Language Models (LLMs) exhibit strong conversational abilities but often generate falsehoods. Prior work suggests that the truthfulness of simple propositions can be represented as a single linear direction in a model's internal activations, but this may not fully capture the underlying geometry of truth representations. In this work, we extend the concept cone framework, recently introduced for modeling refusal, to the domain of truth. We identify multi-dimensional cones that causally mediate truth-related behavior across multiple LLM families. Our results are supported by three lines of evidence: (i) causal interventions reliably flip model responses to factual statements, (ii) learned cones generalize across model architectures, and (iii) cone-based interventions preserve unrelated model behavior. These findings reveal the richer, multidirectional structure governing simple true/false propositions in LLMs and highlight concept cones as a promising tool for probing abstract behaviors.
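To make the intervention idea concrete, below is a minimal sketch (not the authors' code) of a cone-style activation edit: ablating the component of a hidden state that lies in the span of a k-dimensional cone basis, then optionally steering back into the cone with a nonnegative combination of basis directions. All names (`cone_basis`, `weights`, `alpha`) and shapes are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a cone-based activation intervention.
# Assumes hidden states of dimension d and an orthonormal basis
# `cone_basis` (shape d x k) spanning a k-dimensional "truth cone".
import numpy as np

def ablate_cone(h: np.ndarray, cone_basis: np.ndarray) -> np.ndarray:
    """Remove the component of activation h lying in the cone's span."""
    proj = cone_basis @ (cone_basis.T @ h)  # orthogonal projection onto span(basis)
    return h - proj

def steer_into_cone(h: np.ndarray, cone_basis: np.ndarray,
                    weights: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Ablate, then add a unit vector inside the cone (a nonnegative
    combination of basis directions), scaled by alpha."""
    assert np.all(weights >= 0), "cone membership requires nonnegative weights"
    direction = cone_basis @ weights
    direction = direction / np.linalg.norm(direction)
    return ablate_cone(h, cone_basis) + alpha * direction

# Toy usage: d=8 hidden dimension, k=2 cone.
rng = np.random.default_rng(0)
basis, _ = np.linalg.qr(rng.normal(size=(8, 2)))  # orthonormal columns
h = rng.normal(size=8)
h_steered = steer_into_cone(h, basis, weights=np.array([0.7, 0.3]), alpha=2.0)
```

In practice such an edit would be applied to residual-stream activations at chosen layers during the forward pass; the toy arrays here only illustrate the geometry.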
Stanley Yu, Vaidehi Bulusu, Oscar Yasunaga, Clayton Lau, Cole Blondin, Sean O'Brien, Kevin Zhu, Vasu Sharma
Computing Technology; Computer Technology
Stanley Yu, Vaidehi Bulusu, Oscar Yasunaga, Clayton Lau, Cole Blondin, Sean O'Brien, Kevin Zhu, Vasu Sharma. From Directions to Cones: Exploring Multidimensional Representations of Propositional Facts in LLMs [EB/OL]. (2025-05-27) [2025-07-16]. https://arxiv.org/abs/2505.21800.