|国家预印本平台
首页|Discovering Forbidden Topics in Language Models

Discovering Forbidden Topics in Language Models

Discovering Forbidden Topics in Language Models

来源:Arxiv_logoArxiv
英文摘要

Refusal discovery is the task of identifying the full set of topics that a language model refuses to discuss. We introduce this new problem setting and develop a refusal discovery method, Iterated Prefill Crawler (IPC), that uses token prefilling to find forbidden topics. We benchmark IPC on Tulu-3-8B, an open-source model with public safety tuning data. Our crawler manages to retrieve 31 out of 36 topics within a budget of 1000 prompts. Next, we scale the crawler to a frontier model using the prefilling option of Claude-Haiku. Finally, we crawl three widely used open-weight models: Llama-3.3-70B and two of its variants finetuned for reasoning: DeepSeek-R1-70B and Perplexity-R1-1776-70B. DeepSeek-R1-70B reveals patterns consistent with censorship tuning: The model exhibits "thought suppression" behavior that indicates memorization of CCP-aligned responses. Although Perplexity-R1-1776-70B is robust to censorship, IPC elicits CCP-aligned refusals answers in the quantized model. Our findings highlight the critical need for refusal discovery methods to detect biases, boundaries, and alignment failures of AI systems.

Can Rager、Chris Wendler、Rohit Gandikota、David Bau

计算技术、计算机技术

Can Rager,Chris Wendler,Rohit Gandikota,David Bau.Discovering Forbidden Topics in Language Models[EB/OL].(2025-05-22)[2025-06-14].https://arxiv.org/abs/2505.17441.点此复制

评论