
Improving Multilingual Language Models by Aligning Representations through Steering

Source: arXiv
Abstract

In this paper, we investigate how large language models (LLMs) process non-English tokens within their layer representations, an open question despite significant advancements in the field. Using representation steering, specifically by adding a learned vector to the activations of a single model layer, we demonstrate that steering a single layer can notably enhance performance. Our analysis shows that this approach achieves results comparable to translation baselines and surpasses state-of-the-art prompt optimization methods. Additionally, we highlight how advanced techniques such as supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) improve multilingual capabilities by altering representation spaces. We further illustrate how these methods relate to our approach of reshaping LLMs' layer representations.
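The core mechanism the abstract describes, adding a learned vector to the activations of one layer, can be illustrated with a forward hook. The following is a minimal sketch, not the paper's implementation: it assumes a Hugging Face-style causal LM (GPT-2 as a placeholder), an illustrative layer index, and a random stand-in for the steering vector that the paper learns.

```python
# Minimal sketch of representation steering: add a vector to the
# activations of a single transformer layer via a forward hook.
# Assumptions (not from the paper): GPT-2 as a placeholder model,
# layer 6 as the steered layer, and a random placeholder vector
# where the paper uses a learned one.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper's models may differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layer_idx = 6  # illustrative choice of the single layer to steer
hidden_size = model.config.hidden_size

# Stand-in for the learned steering vector.
steering_vector = torch.randn(hidden_size) * 0.1

def steering_hook(module, inputs, output):
    # GPT-2 blocks return a tuple; hidden states are its first element.
    hidden_states = output[0] if isinstance(output, tuple) else output
    steered = hidden_states + steering_vector.to(hidden_states)
    if isinstance(output, tuple):
        return (steered,) + output[1:]
    return steered

handle = model.transformer.h[layer_idx].register_forward_hook(steering_hook)

prompt = "Translate to French: Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook to restore the unsteered model
```

In practice the steering vector would be optimized on a multilingual objective rather than drawn at random, and the choice of layer would follow the paper's analysis of where non-English representations diverge.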

Omar Mahmoud, Buddhika Laknath Semage, Thommen George Karimpanal, Santu Rana

Subject: Computing Technology, Computer Technology

Omar Mahmoud, Buddhika Laknath Semage, Thommen George Karimpanal, Santu Rana. Improving Multilingual Language Models by Aligning Representations through Steering [EB/OL]. (2025-05-18) [2025-06-06]. https://arxiv.org/abs/2505.12584.
