
Exploring the Secondary Risks of Large Language Models

Source: arXiv
Abstract

Ensuring the safety and alignment of Large Language Models (LLMs) is a significant challenge as they are increasingly integrated into critical applications and societal functions. While prior research has primarily focused on jailbreak attacks, less attention has been given to non-adversarial failures that subtly emerge during benign interactions. We introduce secondary risks, a novel class of failure modes marked by harmful or misleading behaviors elicited by benign prompts. Unlike adversarial attacks, these risks stem from imperfect generalization and often evade standard safety mechanisms. To enable systematic evaluation, we define two risk primitives, verbose response and speculative advice, that capture the core failure patterns. Building on these definitions, we propose SecLens, a black-box, multi-objective search framework that efficiently elicits secondary-risk behaviors by optimizing task relevance, risk activation, and linguistic plausibility. To support reproducible evaluation, we release SecRiskBench, a benchmark dataset of 650 prompts covering eight diverse real-world risk categories. Experimental results from extensive evaluations of 16 popular models demonstrate that secondary risks are widespread, transferable across models, and modality-independent, underscoring the urgent need for enhanced safety mechanisms to address benign yet harmful LLM behaviors in real-world deployments.
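The black-box, multi-objective search the abstract describes can be pictured with a short sketch. The Python below is a minimal illustrative greedy hill-climb over candidate prompts, not the authors' implementation: the scoring functions (task_relevance, risk_activation, linguistic_plausibility), the mutation operator, and the weighted scalarization are all hypothetical placeholders standing in for the components SecLens would supply.

# Minimal sketch of a black-box, multi-objective prompt search in the spirit
# of SecLens. All scoring and mutation logic here is a hypothetical placeholder.
import random

def task_relevance(prompt: str) -> float:
    # Placeholder: a real system would score semantic similarity to the task.
    return random.random()

def risk_activation(prompt: str) -> float:
    # Placeholder: a real system would query the target model (black-box)
    # and score the response for verbose/speculative failure patterns.
    return random.random()

def linguistic_plausibility(prompt: str) -> float:
    # Placeholder: a real system might use a language-model fluency score.
    return random.random()

def score(prompt: str, weights=(1.0, 1.0, 1.0)) -> float:
    # Scalarize the three objectives; the paper's exact aggregation may differ.
    w1, w2, w3 = weights
    return (w1 * task_relevance(prompt)
            + w2 * risk_activation(prompt)
            + w3 * linguistic_plausibility(prompt))

def mutate(prompt: str) -> str:
    # Placeholder mutation: a real system would apply semantic edits.
    return prompt + " Please elaborate in detail."

def search(seed: str, iterations: int = 50) -> str:
    # Greedy hill-climbing over candidate prompts (illustrative only).
    best, best_score = seed, score(seed)
    for _ in range(iterations):
        candidate = mutate(best)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best

if __name__ == "__main__":
    print(search("Summarize the side effects of this medication."))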

Jiawei Chen, Zhengwei Fang, Xiao Yang, Chao Yu, Zhaoxia Yin, Hang Su

Subject: Computing Technology; Computer Technology

Jiawei Chen, Zhengwei Fang, Xiao Yang, Chao Yu, Zhaoxia Yin, Hang Su. Exploring the Secondary Risks of Large Language Models [EB/OL]. (2025-06-21) [2025-07-01]. https://arxiv.org/abs/2506.12382.
