|国家预印本平台
首页|Safe Semantics, Unsafe Interpretations: Tackling Implicit Reasoning Safety in Large Vision-Language Models

Safe Semantics, Unsafe Interpretations: Tackling Implicit Reasoning Safety in Large Vision-Language Models

Safe Semantics, Unsafe Interpretations: Tackling Implicit Reasoning Safety in Large Vision-Language Models

来源:Arxiv_logoArxiv
英文摘要

Large Vision-Language Models face growing safety challenges with multimodal inputs. This paper introduces the concept of Implicit Reasoning Safety, a vulnerability in LVLMs. Benign combined inputs trigger unsafe LVLM outputs due to flawed or hidden reasoning. To showcase this, we developed Safe Semantics, Unsafe Interpretations, the first dataset for this critical issue. Our demonstrations show that even simple In-Context Learning with SSUI significantly mitigates these implicit multimodal threats, underscoring the urgent need to improve cross-modal implicit reasoning.

Wei Cai、Jian Zhao、Yuchu Jiang、Tianle Zhang、Xuelong Li

计算技术、计算机技术

Wei Cai,Jian Zhao,Yuchu Jiang,Tianle Zhang,Xuelong Li.Safe Semantics, Unsafe Interpretations: Tackling Implicit Reasoning Safety in Large Vision-Language Models[EB/OL].(2025-08-12)[2025-08-24].https://arxiv.org/abs/2508.08926.点此复制

评论