Steering Risk Preferences in Large Language Models by Aligning Behavioral and Neural Representations
Changing the behavior of large language models (LLMs) can be as straightforward as editing the Transformer's residual streams using appropriately constructed "steering vectors." These modifications to internal neural activations, a form of representation engineering, offer an effective and targeted means of influencing model behavior without retraining or fine-tuning the model. But how can such steering vectors be systematically identified? We propose a principled approach for uncovering steering vectors by aligning latent representations elicited through behavioral methods (specifically, Markov chain Monte Carlo with LLMs) with their neural counterparts. To evaluate this approach, we focus on extracting latent risk preferences from LLMs and steering their risk-related outputs using the aligned representations as steering vectors. We show that the resulting steering vectors successfully and reliably modulate LLM outputs in line with the targeted behavior.
Jian-Qiao Zhu, Haijiang Yan, Thomas L. Griffiths
Computing Technology, Computer Technology
Jian-Qiao Zhu, Haijiang Yan, Thomas L. Griffiths. Steering Risk Preferences in Large Language Models by Aligning Behavioral and Neural Representations [EB/OL]. (2025-05-16) [2025-06-08]. https://arxiv.org/abs/2505.11615.
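
As a rough illustration of the mechanism the abstract describes, the sketch below adds a pre-computed steering vector to one layer's residual stream with a PyTorch forward hook. The model name, layer index, steering strength, and the random placeholder vector are illustrative assumptions, not the authors' setup; in the paper, the steering vector is obtained by aligning behavioral representations elicited via Markov chain Monte Carlo with the model's neural representations.

```python
# Minimal sketch (not the authors' implementation): steer an LLM by adding a
# vector to the residual stream of one Transformer block via a forward hook.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"   # assumed model; any decoder-only LM with accessible blocks works
layer_idx = 6         # assumed layer whose residual stream is edited
alpha = 4.0           # assumed steering strength

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Placeholder vector: in the paper this comes from aligning behavioral (MCMC-elicited)
# and neural representations; here it is random purely for illustration.
steering_vector = torch.randn(model.config.hidden_size)
steering_vector = steering_vector / steering_vector.norm()

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the residual-stream
    # hidden states of shape (batch, seq_len, hidden_size).
    hidden = output[0] + alpha * steering_vector.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(add_steering)

prompt = "Would you take a 50% chance of winning $100, or a sure $40?"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # remove the hook to restore unsteered behavior
```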