
Content Moderation on Social Media in the EU: Insights From the DSA Transparency Database (2312.04431v1)

Published 7 Dec 2023 in cs.SI and cs.CY

Abstract: The Digital Services Act (DSA) requires large social media platforms in the EU to provide clear and specific information whenever they remove or restrict access to certain content. These "Statements of Reasons" (SoRs) are collected in the DSA Transparency Database to ensure transparency and scrutiny of the content moderation decisions of online platform providers. In this work, we empirically analyze 156 million SoRs over a two-month observation period to provide an early look at the content moderation decisions of social media platforms in the EU. Our empirical analysis yields the following main findings: (i) There are vast differences in the frequency of content moderation across platforms. For instance, TikTok makes more than 350 times as many content moderation decisions per user as X/Twitter. (ii) Content moderation is most commonly applied to text and videos, whereas images and other content formats are moderated less frequently. (iii) The primary reasons for moderation include content falling outside the platform's scope of service, illegal/harmful speech, and pornography/sexualized content, while moderation of misinformation is relatively uncommon. (iv) The majority of rule-breaking content is detected and decided upon via automated means rather than manual intervention. However, X/Twitter reports relying solely on non-automated methods. (v) There is significant variation in the content moderation actions taken across platforms. Altogether, our study implies inconsistencies in how social media platforms implement their obligations under the DSA, resulting in exactly the kind of fragmented outcome the DSA is meant to avoid. Our findings have important implications for regulators, who may need to clarify existing guidelines or lay out more specific rules that ensure common standards for how social media providers handle rule-breaking content on their platforms.
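The analysis described above boils down to aggregating SoRs by platform and by detection method. A minimal sketch of that kind of aggregation is shown below; the record layout and field names (`platform`, `content_type`, `automated_detection`) are simplified assumptions for illustration, not the actual schema of the DSA Transparency Database, and the records are synthetic stand-ins for the 156 million SoRs analyzed in the paper.

```python
from collections import Counter

# Hypothetical, simplified SoR records. Real SoRs in the DSA Transparency
# Database carry many more fields (legal ground, territorial scope, etc.).
sors = [
    {"platform": "TikTok", "content_type": "video", "automated_detection": True},
    {"platform": "TikTok", "content_type": "text", "automated_detection": True},
    {"platform": "X", "content_type": "text", "automated_detection": False},
    {"platform": "Facebook", "content_type": "image", "automated_detection": True},
]

def moderation_counts(records):
    """Count Statements of Reasons per platform."""
    return Counter(r["platform"] for r in records)

def automation_share(records):
    """Fraction of SoRs reported as detected via automated means."""
    automated = sum(1 for r in records if r["automated_detection"])
    return automated / len(records)

print(moderation_counts(sors))   # per-platform decision counts
print(automation_share(sors))    # 0.75
```

To compare platforms fairly, the per-platform counts would then be normalized by each platform's EU user base, which is how the paper arrives at per-user moderation rates.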

Authors (2)
  1. Chiara Drolsbach
  2. Nicolas Pröllochs
Citations (3)

