National Preprint Platform

ContextBench: Modifying Contexts for Targeted Latent Activation

Source: arXiv
Abstract

Identifying inputs that trigger specific behaviours or latent features in language models could have a wide range of safety use cases. We investigate a class of methods capable of generating targeted, linguistically fluent inputs that activate specific latent features or elicit model behaviours. We formalise this approach as context modification and present ContextBench -- a benchmark with tasks assessing core method capabilities and potential safety applications. Our evaluation framework measures both elicitation strength (activation of latent features or behaviours) and linguistic fluency, highlighting how current state-of-the-art methods struggle to balance these objectives. We enhance Evolutionary Prompt Optimisation (EPO) with LLM-assistance and diffusion model inpainting, and demonstrate that these variants achieve state-of-the-art performance in balancing elicitation effectiveness and fluency.
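The abstract describes balancing two objectives, elicitation strength and linguistic fluency, via Evolutionary Prompt Optimisation (EPO). A minimal toy sketch of that idea is below; it is not the paper's implementation, and the scoring functions, vocabulary, and weighting are all illustrative stand-ins: elicitation is approximated as overlap with hypothetical target tokens, and fluency as a crude repetition penalty.

```python
import random

random.seed(0)

# Hypothetical stand-ins for the two objectives (not from the paper):
# target tokens whose presence counts as "elicitation", and a small vocabulary.
TARGET_TOKENS = {"safety", "latent", "feature"}
VOCAB = ["the", "model", "safety", "latent", "feature", "input", "context"]

def elicitation(prompt):
    """Fraction of target tokens present in the prompt (toy proxy)."""
    return len(TARGET_TOKENS & set(prompt)) / len(TARGET_TOKENS)

def fluency(prompt):
    """Crude fluency proxy: penalise immediate token repetition."""
    repeats = sum(1 for a, b in zip(prompt, prompt[1:]) if a == b)
    return 1.0 - repeats / max(len(prompt) - 1, 1)

def score(prompt, lam=0.5):
    """Weighted combination of the two competing objectives."""
    return lam * elicitation(prompt) + (1 - lam) * fluency(prompt)

def mutate(prompt):
    """Replace one random token with a random vocabulary token."""
    p = list(prompt)
    p[random.randrange(len(p))] = random.choice(VOCAB)
    return p

def evolve(pop_size=20, length=6, generations=50):
    """Simple (mu + lambda)-style evolutionary search over token prompts."""
    pop = [[random.choice(VOCAB) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the top half
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in survivors]    # refill via mutation
    return max(pop, key=score)

best = evolve()
print(" ".join(best), round(score(best), 2))
```

The real method presumably scores candidates against model latent activations and a language-model fluency measure rather than these token heuristics, but the selection/mutation loop has the same shape.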

Robert Graham, Edward Stevinson, Leo Richter, Alexander Chia, Joseph Miller, Joseph Isaac Bloom

Subjects: Computing Technology; Computer Technology

Robert Graham, Edward Stevinson, Leo Richter, Alexander Chia, Joseph Miller, Joseph Isaac Bloom. ContextBench: Modifying Contexts for Targeted Latent Activation [EB/OL]. (2025-06-15) [2025-06-29]. https://arxiv.org/abs/2506.15735