
Information Suppression in Large Language Models: Auditing, Quantifying, and Characterizing Censorship in DeepSeek

Source: arXiv
Abstract

This study examines information suppression mechanisms in DeepSeek, an open-source large language model (LLM) developed in China. We propose an auditing framework and use it to analyze the model's responses to 646 politically sensitive prompts, comparing its final output with its intermediate chain-of-thought (CoT) reasoning. Our audit reveals evidence of semantic-level information suppression in DeepSeek: sensitive content often appears within the model's internal reasoning but is omitted or rephrased in the final output. Specifically, DeepSeek suppresses references to transparency, government accountability, and civic mobilization, while occasionally amplifying language aligned with state propaganda. This study underscores the need for systematic auditing of the alignment, content moderation, information suppression, and censorship practices embedded in widely adopted AI models, to ensure transparency, accountability, and equitable access to unbiased information through these systems.
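The audit's core comparison, CoT trace versus final output, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: `query_model`, the placeholder `SENSITIVE_TERMS` list, and the 0.5 similarity threshold are hypothetical and not the authors' code; the sketch only mirrors the idea of flagging prompts where content surfaces in the reasoning trace but vanishes from the answer.

```python
# Minimal sketch of the CoT-vs-output comparison. query_model(),
# SENSITIVE_TERMS, and the 0.5 threshold are illustrative placeholders,
# not the authors' actual framework.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Placeholder keyword list; the paper's suppressed themes include
# transparency, government accountability, and civic mobilization.
SENSITIVE_TERMS = ["transparency", "accountability", "mobilization"]


def query_model(prompt: str) -> tuple[str, str]:
    """Hypothetical wrapper returning (chain_of_thought, final_answer).
    R1-style models expose reasoning inside <think>...</think> tags,
    which a real client would parse out here."""
    raise NotImplementedError("plug in an actual model client")


def audit(prompt: str, sim_threshold: float = 0.5) -> dict:
    cot, answer = query_model(prompt)
    # Signal 1: terms raised while reasoning but absent from the answer.
    dropped = [t for t in SENSITIVE_TERMS
               if t in cot.lower() and t not in answer.lower()]
    # Signal 2: low semantic similarity between CoT and answer suggests
    # content surfaced internally was omitted or rephrased.
    sim = util.cos_sim(embedder.encode(cot), embedder.encode(answer)).item()
    return {
        "prompt": prompt,
        "dropped_terms": dropped,
        "cot_answer_similarity": sim,
        "flagged": bool(dropped) or sim < sim_threshold,
    }
```

Run over the full prompt set, flagged cases would then feed the semantic-level analysis the abstract describes; the keyword and similarity signals above stand in for whatever detection method the paper itself employs.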

Peiran Qiu, Siyi Zhou, Emilio Ferrara

Subject areas: Computing Technology, Computer Technology

Peiran Qiu, Siyi Zhou, Emilio Ferrara. Information Suppression in Large Language Models: Auditing, Quantifying, and Characterizing Censorship in DeepSeek [EB/OL]. (2025-06-14) [2025-07-16]. https://arxiv.org/abs/2506.12349.
