Papers
Topics
Authors
Recent
Assistant
AI Research Assistant
Well-researched responses based on relevant abstracts and paper content.
Custom Instructions Pro
Preferences or requirements that you'd like Emergent Mind to consider when generating responses.
Gemini 2.5 Flash
Gemini 2.5 Flash 180 tok/s
Gemini 2.5 Pro 55 tok/s Pro
GPT-5 Medium 36 tok/s Pro
GPT-5 High 42 tok/s Pro
GPT-4o 66 tok/s Pro
Kimi K2 163 tok/s Pro
GPT OSS 120B 443 tok/s Pro
Claude Sonnet 4.5 38 tok/s Pro
2000 character limit reached

Information Suppression in Large Language Models: Auditing, Quantifying, and Characterizing Censorship in DeepSeek (2506.12349v1)

Published 14 Jun 2025 in cs.CY, cs.AI, and cs.CL

Abstract: This study examines information suppression mechanisms in DeepSeek, an open-source LLM developed in China. We propose an auditing framework and use it to analyze the model's responses to 646 politically sensitive prompts by comparing its final output with intermediate chain-of-thought (CoT) reasoning. Our audit unveils evidence of semantic-level information suppression in DeepSeek: sensitive content often appears within the model's internal reasoning but is omitted or rephrased in the final output. Specifically, DeepSeek suppresses references to transparency, government accountability, and civic mobilization, while occasionally amplifying language aligned with state propaganda. This study underscores the need for systematic auditing of alignment, content moderation, information suppression, and censorship practices implemented into widely-adopted AI models, to ensure transparency, accountability, and equitable access to unbiased information obtained by means of these systems.

Summary

  • The paper establishes an auditing framework to compare DeepSeek's chain-of-thought reasoning with its final outputs for politically sensitive prompts.
  • It quantifies censorship with 1.9% outright refusals and 11.1% semantic divergences in responses related to governance and civic issues.
  • The findings reveal subtle semantic censorship practices, highlighting the need for standardized auditing tools for transparent AI governance.

Auditing Information Suppression in LLMs

The paper presents a critical examination of information suppression mechanisms in DeepSeek, a Chinese open-source LLM. The researchers develop a comprehensive auditing framework to analyze how DeepSeek responds to politically sensitive prompts, using semantic-level investigation of its chain-of-thought (CoT) reasoning and final outputs. The paper identifies significant evidence of semantic censorship, with references to government transparency, accountability, and civic mobilization often suppressed or omitted in the model's responses.

The audit process involves a dataset of 646 politically sensitive prompts, specifically selected to reflect topics historically censored within China's information ecosystem. The research seeks to unpack the mechanisms behind such censorship, delineating whether it arises primarily from internal model alignment or external moderation constraints. The scholars meticulously compare the model's CoT steps with its final output to highlight the discrepancy in information suppression at both superficial and deeper semantic levels.

Among the findings, 1.9% of turns exhibit type 1 censorship (outright refusal to provide an output), and 11.1% show type 2 censorship (semantic divergence, where the CoT contains relevant keywords that are absent in the final output). The frequencies of suppressed content remarkably correspond with topics critical of the Chinese political regime or those entailing calls for collective action.

The researchers reveal notable differences across episodic and thematic prompts, with episodic prompts triggering more censorship instances. Groups related to governance, social rights, and public health display pronounced semantic suppression, contrasting with technological and environmental groups that undergo less moderation.

This paper emphasizes the subtlety of modern censorship practices in LLMs, arguing they increasingly manifest at semantic levels rather than through overt content refusal. Such mechanisms threaten epistemic integrity by providing the illusion of comprehensive information while strategically omitting or misrepresenting key content inputs.

These findings raise significant ethical concerns regarding AI models' transparency and accountability, especially LLMs developed within heavily regulated digital contexts like China. The implications for researchers and policymakers are profound: there is an urgent need for standardized auditing tools that can detect covert forms of information suppression and help ensure equitable access to unbiased information.

Future research should expand on these methodologies to quantify the persuasive impact of embedded propaganda in LLM outputs and devise strategies to counteract these biases. The integration of countermeasures and transparency demands in AI governance could further contribute to the development of fairer and more trustworthy AI-mediated communication infrastructures.

Dice Question Streamline Icon: https://streamlinehq.com

Open Problems

We haven't generated a list of open problems mentioned in this paper yet.

List To Do Tasks Checklist Streamline Icon: https://streamlinehq.com

Collections

Sign up for free to add this paper to one or more collections.

X Twitter Logo Streamline Icon: https://streamlinehq.com

Tweets

This paper has been mentioned in 1 tweet and received 3 likes.

Upgrade to Pro to view all of the tweets about this paper: