
AppealMod: Inducing Friction to Reduce Moderator Workload of Handling User Appeals (2301.07163v2)

Published 17 Jan 2023 in cs.CY and cs.HC

Abstract: As content moderation becomes a central aspect of all social media platforms and online communities, interest has grown in how to make moderation decisions contestable. On social media platforms where individual communities moderate their own activities, the responsibility to address user appeals falls on volunteers from within the community. While there is a growing body of work devoted to understanding and supporting the volunteer moderators' workload, little is known about their practice of handling user appeals. Through a collaborative and iterative design process with Reddit moderators, we found that moderators spent considerable effort investigating user ban appeals and desired to engage directly with users and retain their agency over each decision. To fulfill these needs, we designed and built AppealMod, a system that induces friction in the appeals process by asking users to provide additional information before their appeals are reviewed by human moderators. In addition to giving moderators more information, we expected the friction in the appeal process to produce a selection effect among users, with many insincere and toxic appeals being abandoned before getting any attention from human moderators. To evaluate our system, we conducted a four-month randomized field experiment in a Reddit community of over 29 million users. As a result of the selection effect, moderators viewed only 30% of initial appeals and less than 10% of the toxically worded appeals, yet they granted roughly the same number of appeals as in the control group. Overall, our system is effective at reducing moderator workload and minimizing their exposure to toxic content while honoring their preference for direct engagement and agency in appeals.
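The workflow the abstract describes — a system that asks appealing users for additional information and forwards only completed appeals to human moderators — can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation; the class names, fields, and the `completed_form` flag are assumptions made for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Appeal:
    user: str
    text: str
    completed_form: bool = False  # did the user answer the follow-up questions?

def triage(appeals):
    """Split appeals into those surfaced to moderators and those abandoned.

    Only appeals where the user completed the extra-information step are
    forwarded; the rest never reach a human moderator, which is the
    selection effect the paper measures.
    """
    surfaced = [a for a in appeals if a.completed_form]
    abandoned = [a for a in appeals if not a.completed_form]
    return surfaced, abandoned

appeals = [
    Appeal("alice", "I think my ban was a mistake.", completed_form=True),
    Appeal("bob", "unban me now"),
    Appeal("carol", "Here is some context about my comment.", completed_form=True),
]
surfaced, abandoned = triage(appeals)
print(len(surfaced), len(abandoned))  # prints "2 1"
```

The point of the design is that the filtering step requires no automated judgment about appeal quality: moderators still review every appeal that reaches them, preserving their agency, while the friction alone reduces the volume.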

Authors (4)
  1. Shubham Atreja
  2. Jane Im
  3. Paul Resnick
  4. Libby Hemphill
