Tracking and Quantifying Censorship on a Chinese Microblogging Site (1211.6166v1)

Published 26 Nov 2012 in cs.IR and cs.CR

Abstract: We present measurements and analysis of censorship on Weibo, a popular microblogging site in China. Since we were limited in the rate at which we could download posts, we identified users likely to participate in sensitive topics and recursively followed their social contacts. We also leveraged new natural language processing techniques to pick out trending topics despite the use of neologisms, named entities, and informal language usage in Chinese social media. We found that Weibo dynamically adapts to the changing interests of its users through multiple layers of filtering. The filtering includes both retroactively searching posts by keyword or repost links to delete them, and rejecting posts as they are posted. The trend of sensitive topics is short-lived, suggesting that the censorship is effective in stopping the "viral" spread of sensitive issues. We also give evidence that sensitive topics in Weibo only scarcely propagate beyond a core of sensitive posters.
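The data-collection strategy described in the abstract (seeding with users likely to post on sensitive topics and recursively following their social contacts under a download-rate limit) is essentially rate-limited snowball sampling. The sketch below is a minimal illustration of that idea, not the authors' actual code; the `get_contacts` function, the seed user IDs, and the rate-limit value are hypothetical stand-ins for whatever Weibo API access the study used.

```python
import time
from collections import deque


def get_contacts(user_id):
    """Hypothetical stand-in for a Weibo API call returning a user's
    social contacts. The real study would back this with rate-limited
    HTTP requests to Weibo."""
    return []  # placeholder


def snowball_crawl(seed_users, max_users=10000, requests_per_minute=60):
    """Breadth-first snowball crawl: start from seed users and
    recursively enqueue their contacts, sleeping between requests to
    stay under a crude download-rate limit."""
    delay = 60.0 / requests_per_minute
    seen = set(seed_users)
    queue = deque(seed_users)
    collected = []

    while queue and len(collected) < max_users:
        user = queue.popleft()
        collected.append(user)
        for contact in get_contacts(user):
            if contact not in seen:
                seen.add(contact)
                queue.append(contact)
        time.sleep(delay)  # respect the rate limit on downloads

    return collected


if __name__ == "__main__":
    # Seeds would be users already observed posting on sensitive topics.
    users = snowball_crawl(["seed_user_1", "seed_user_2"], max_users=100)
    print(len(users))
```

In practice the seed set matters most: starting from users already engaged in sensitive topics biases the crawl toward the population whose posts are most likely to be censored, which is the point of the sampling design.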

Authors (5)
  1. Tao Zhu (205 papers)
  2. David Phipps (3 papers)
  3. Adam Pridgen (3 papers)
  4. Jedidiah R. Crandall (9 papers)
  5. Dan S. Wallach (23 papers)
Citations (14)
