Do LLMs Understand the Safety of Their Inputs? Training-Free Moderation via Latent Prototypes

Source: arXiv
English Abstract

With the rise of LLMs, ensuring model safety and alignment has become a critical concern. While modern instruction-finetuned LLMs incorporate alignment during training, they still frequently require moderation tools to prevent unsafe behavior. The most common approach to moderation is the use of guard models that flag unsafe inputs. However, guard models require costly training and are typically limited to fixed-size, pre-trained options, making them difficult to adapt to evolving risks and resource constraints. We hypothesize that instruction-finetuned LLMs already encode safety-relevant information internally and explore training-free safety assessment methods that work with off-the-shelf models. We show that simple prompting allows models to recognize harmful inputs they would otherwise mishandle. We also demonstrate that safe and unsafe prompts are distinctly separable in the models' latent space. Building on this, we introduce the Latent Prototype Moderator (LPM), a training-free moderation method that uses Mahalanobis distance in latent space to assess input safety. LPM is a lightweight, customizable add-on that generalizes across model families and sizes. Our method matches or exceeds state-of-the-art guard models across multiple safety benchmarks, offering a practical and flexible solution for scalable LLM moderation.
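
As a rough illustration of the prototype idea described in the abstract, the Python sketch below labels a prompt's hidden-state vector by its Mahalanobis distance to a "safe" and an "unsafe" prototype under a shared covariance estimate. This is a minimal sketch under stated assumptions, not the authors' implementation: the hidden-state extraction step is omitted, and all function names, the regularization constant, and the toy data are illustrative.

# Minimal sketch of prototype-based moderation via Mahalanobis distance,
# assuming hidden-state vectors have already been extracted from an
# instruction-finetuned LLM. Names and constants here are illustrative.
import numpy as np

def fit_prototypes(safe_feats: np.ndarray, unsafe_feats: np.ndarray):
    """Estimate class prototypes (means) and the inverse of a shared covariance."""
    mu_safe = safe_feats.mean(axis=0)
    mu_unsafe = unsafe_feats.mean(axis=0)
    # Pooled covariance over both classes, regularized for numerical stability.
    centered = np.vstack([safe_feats - mu_safe, unsafe_feats - mu_unsafe])
    cov = np.cov(centered, rowvar=False) + 1e-3 * np.eye(centered.shape[1])
    return (mu_safe, mu_unsafe), np.linalg.inv(cov)

def classify(feat: np.ndarray, prototypes, cov_inv) -> str:
    """Assign the label of the prototype with the smaller Mahalanobis distance."""
    def mahalanobis_sq(mu):
        d = feat - mu
        return float(d @ cov_inv @ d)
    mu_safe, mu_unsafe = prototypes
    return "safe" if mahalanobis_sq(mu_safe) < mahalanobis_sq(mu_unsafe) else "unsafe"

# Toy usage with random features standing in for LLM hidden states.
rng = np.random.default_rng(0)
safe = rng.normal(0.0, 1.0, size=(100, 16))
unsafe = rng.normal(2.0, 1.0, size=(100, 16))
protos, cov_inv = fit_prototypes(safe, unsafe)
print(classify(rng.normal(2.0, 1.0, size=16), protos, cov_inv))  # likely "unsafe"

Because the prototypes and covariance are estimated from a small labeled set rather than learned by gradient training, the moderator can be recomputed cheaply for a new model or a new risk taxonomy, which is the flexibility the abstract highlights.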

Filip Szatkowski, Jan Dubiński, Maciej Chrabąszcz, Bartosz Wójcik, Tomasz Trzciński, Sebastian Cygert

Subject: Computing Technology; Computer Technology

Filip Szatkowski, Jan Dubiński, Maciej Chrabąszcz, Bartosz Wójcik, Tomasz Trzciński, Sebastian Cygert. Do LLMs Understand the Safety of Their Inputs? Training-Free Moderation via Latent Prototypes [EB/OL]. (2025-07-07) [2025-07-19]. https://arxiv.org/abs/2502.16174.
