Separating Tongue from Thought: Activation Patching Reveals Language-Agnostic Concept Representations in Transformers
A central question in multilingual language modeling is whether large language models (LLMs) develop a universal concept representation, disentangled from specific languages. In this paper, we address this question by analyzing latent representations (latents) during a word-translation task in transformer-based LLMs. We strategically extract latents from a source translation prompt and insert them into the forward pass on a target translation prompt. By doing so, we find that the output language is encoded in the latent at an earlier layer than the concept to be translated. Building on this insight, we conduct two key experiments. First, we demonstrate that we can change the concept without changing the language, and vice versa, through activation patching alone. Second, we show that patching with the mean representation of a concept across different languages does not impair the model's ability to translate it, but instead improves it. Finally, we generalize to multi-token generation and demonstrate that the model can generate natural-language descriptions of those mean representations. Our results provide evidence for the existence of language-agnostic concept representations within the investigated models.
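The extract-and-insert procedure described above is activation patching: a latent is cached from a forward pass on the source prompt and substituted into the forward pass on the target prompt at the same layer. The sketch below illustrates the mechanic on a toy feed-forward stack; the real experiments use actual transformer LLMs, and the model, dimensions, and layer index here are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a transformer's residual stream: a stack of
# feed-forward "layers". A real experiment would hook an actual LLM;
# this model and its layer indices are illustrative assumptions.
class ToyModel(nn.Module):
    def __init__(self, d_model=16, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(d_model, d_model) for _ in range(n_layers)
        )

def run_with_patch(model, x, patch_layer=None, patch_value=None):
    """Run a forward pass, caching the latent after every layer.
    If patch_layer is set, replace that layer's latent with
    patch_value before continuing (activation patching)."""
    cache = {}
    for i, layer in enumerate(model.layers):
        x = torch.relu(layer(x))
        cache[i] = x
        if patch_layer is not None and i == patch_layer:
            x = patch_value  # insert the latent from the source run
    return x, cache

model = ToyModel()

# 1) Source-prompt run: cache the latent at an intermediate layer.
src = torch.randn(1, 16)
_, src_cache = run_with_patch(model, src)
latent = src_cache[1]  # latent after layer 1 (assumed index)

# 2) Target-prompt run: patch the cached latent into the forward pass.
tgt = torch.randn(1, 16)
patched_out, _ = run_with_patch(model, tgt, patch_layer=1, patch_value=latent)
clean_out, _ = run_with_patch(model, tgt)
```

Because the patch replaces the entire latent, every layer downstream of the patch point processes the source latent exactly as in the source run, so the patched output matches the source run's output while diverging from the clean target run. In the paper's setting, patching only partially overwrites what the target prompt determines, which is how language and concept can be swapped independently.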
Clément Dumas, Chris Wendler, Veniamin Veselovsky, Giovanni Monea, Robert West
Linguistics
Clément Dumas, Chris Wendler, Veniamin Veselovsky, Giovanni Monea, Robert West. Separating Tongue from Thought: Activation Patching Reveals Language-Agnostic Concept Representations in Transformers [EB/OL]. (2025-06-25) [2025-07-16]. https://arxiv.org/abs/2411.08745.