A Browser Extension for in-place Signaling and Assessment of Misinformation (2403.11485v1)

Published 18 Mar 2024 in cs.HC

Abstract: The status-quo of misinformation moderation is a central authority, usually social platforms, deciding what content constitutes misinformation and how it should be handled. However, to preserve users' autonomy, researchers have explored democratized misinformation moderation. One proposition is to enable users to assess content accuracy and specify whose assessments they trust. We explore how these affordances can be provided on the web, without cooperation from the platforms where users consume content. We present a browser extension that empowers users to assess the accuracy of any content on the web and shows the user assessments from their trusted sources in-situ. Through a two-week user study, we report on how users perceive such a tool, the kind of content users want to assess, and the rationales they use in their assessments. We identify implications for designing tools that enable users to moderate content for themselves with the help of those they trust.

Democratizing Misinformation Moderation: A Case Study of a Browser Extension for In-Place Content Assessment

Introduction

The proliferation of misinformation on the web has spurred significant efforts to identify and mitigate its spread. Traditional strategies predominantly center on centralized moderation by platform operators or third-party fact-checkers, raising concerns about autonomy, bias, and the breadth of content that gets moderated. Exploring an alternative approach, this paper introduces a browser extension aimed at democratizing content moderation. This platform-agnostic tool empowers users to assess content accuracy across the web and to view assessments from their chosen trusted sources directly where the content is consumed.

Design of the Tool

The Trustnet browser extension allows users to both submit and view assessments of content accuracy in situ. Content can be marked as "accurate," "inaccurate," or flagged for further inquiry directly within the extension's interface. A color-coded feedback system (green for accurate, red for inaccurate, orange for disputed) provides clear and immediate visual cues about content credibility as assessed by the user's trusted network. The two-week user study confirmed the feasibility of such a tool in expanding the scope of content subject to moderation beyond the limits of centralized fact-checking infrastructure, covering a wide array of sources and content types, from news articles and social media posts to YouTube videos.
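
To make the in-situ signaling concrete, the sketch below shows one way a content script could surface trusted assessments on arbitrary pages: it looks up the links visible on a page against assessments from the user's trusted sources and outlines them using the green/red/orange color coding described above. The getAssessments message, the Assessment shape, and the link-outlining strategy are illustrative assumptions, not details specified by the paper.

```typescript
// Minimal sketch of in-place signaling in a browser-extension content script.
// The message type, data shapes, and styling choices are assumptions for
// illustration; the paper does not prescribe an implementation.

type Verdict = "accurate" | "inaccurate" | "disputed";

interface Assessment {
  url: string;          // canonical URL of the assessed content
  verdict: Verdict;     // aggregate verdict from the user's trusted sources
  assessors: string[];  // trusted sources who contributed assessments
}

// Color coding described in the summary: green = accurate, red = inaccurate,
// orange = disputed.
const VERDICT_COLORS: Record<Verdict, string> = {
  accurate: "#2e7d32",
  inaccurate: "#c62828",
  disputed: "#ef6c00",
};

// Hypothetical call to the extension's background service worker for
// assessments of the URLs currently linked on the page.
async function fetchAssessments(urls: string[]): Promise<Assessment[]> {
  return chrome.runtime.sendMessage({ type: "getAssessments", urls });
}

// Annotate assessed links with a colored outline so the signal appears
// where the content is consumed, without the host platform's cooperation.
async function annotatePage(): Promise<void> {
  const links = Array.from(document.querySelectorAll<HTMLAnchorElement>("a[href]"));
  const assessments = await fetchAssessments(links.map((a) => a.href));
  const byUrl = new Map(assessments.map((a) => [a.url, a] as [string, Assessment]));

  for (const link of links) {
    const assessment = byUrl.get(link.href);
    if (assessment) {
      link.style.outline = `2px solid ${VERDICT_COLORS[assessment.verdict]}`;
      link.title = `Assessed by: ${assessment.assessors.join(", ")}`;
    }
  }
}

annotatePage();
```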

Incentivizing and Trust-Building in Moderation

A central challenge for democratized moderation systems lies in incentivizing user participation and fostering a trusted environment for content assessments. Study participants underscored the importance of a reputation system and community engagement as motivators for contribution. Users also expressed a preference for mechanisms that convey assessors' credibility, such as displaying political leanings, biases, and relevant credentials. Addressing concerns about abuse and bias requires a comprehensive approach that combines technological solutions with community governance to ensure the integrity of the moderation process.
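
As a rough illustration of the credibility signals participants asked for, an assessor profile might carry fields like the ones below. The field names and structure are hypothetical and are not drawn from the extension's actual data model.

```typescript
// Hypothetical assessor profile capturing the credibility and reputation
// signals participants said they wanted to see; illustrative only.
interface AssessorProfile {
  id: string;
  displayName: string;
  credentials?: string[];       // e.g., "physician", "journalist"
  declaredLeaning?: "left" | "center" | "right" | "undisclosed";
  reputation: number;           // e.g., derived from endorsements by other users
  assessmentCount: number;      // participation signal for a reputation system
}
```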

Democratized Moderation: Challenges and Opportunities

While the browser extension exemplifies the potential of democratized content moderation, several challenges warrant further attention. Participants indicated a need for per-user customization, suggesting that one-size-fits-all approaches to content labeling and actions are unlikely to satisfy diverse preferences. Another notable limitation is that the extension only runs on desktop browsers, excluding the significant portion of web users who primarily access content on mobile devices.
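
A minimal sketch of what per-user customization could look like, assuming hypothetical preference names and actions; the study surfaces the need but does not define a settings schema.

```typescript
// Hypothetical per-user preferences for how assessed content is labeled
// and acted upon; names and defaults are illustrative assumptions.
interface UserPreferences {
  // Which verdicts should produce a visible signal at all.
  showVerdicts: { accurate: boolean; inaccurate: boolean; disputed: boolean };
  // What to do with content that trusted sources marked inaccurate.
  inaccurateAction: "label" | "blur" | "collapse" | "hide";
  // Minimum number of trusted assessors required before a signal is shown.
  minTrustedAssessors: number;
}

const defaults: UserPreferences = {
  showVerdicts: { accurate: true, inaccurate: true, disputed: true },
  inaccurateAction: "label",
  minTrustedAssessors: 1,
};

// Persist in extension storage so the content script can read it.
chrome.storage.sync.set({ preferences: defaults });
```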

Future Directions

This case study paves the way for future development in democratized content moderation. Extending beyond accuracy assessments to a broader range of labels, such as content valence or relevance to specific communities, could enhance the tool's utility. Further research is needed on trust dynamics within networks of assessors and on algorithmic prediction models that could scale trusted assessments. Supporting mobile platforms also remains a critical area for future work.
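
One way to reason about trust dynamics in a network of assessors is transitive propagation, where assessments from sources trusted by the user's trusted sources reach the user with attenuated weight. The depth limit and decay factor in the sketch below are illustrative assumptions, not findings from the paper.

```typescript
// Illustrative transitive trust propagation over a network of assessors.
// If Alice trusts Bob and Bob trusts Carol, Carol's assessments reach Alice
// with attenuated weight. Depth limit and decay factor are assumptions.

type TrustGraph = Map<string, Set<string>>; // assessor -> directly trusted assessors

function propagateTrust(
  graph: TrustGraph,
  root: string,
  maxDepth = 2,
  decay = 0.5
): Map<string, number> {
  const weights = new Map<string, number>([[root, 1]]);
  let frontier = new Set([root]);

  for (let depth = 1; depth <= maxDepth; depth++) {
    const next = new Set<string>();
    for (const node of frontier) {
      for (const trusted of graph.get(node) ?? []) {
        if (!weights.has(trusted)) {
          // Weight falls off with social distance from the root user.
          weights.set(trusted, Math.pow(decay, depth));
          next.add(trusted);
        }
      }
    }
    frontier = next;
  }
  weights.delete(root); // the user's own node carries no assessment weight
  return weights;
}

// Example: alice -> bob -> carol yields bob = 0.5 and carol = 0.25.
const graph: TrustGraph = new Map([
  ["alice", new Set(["bob"])],
  ["bob", new Set(["carol"])],
]);
console.log(propagateTrust(graph, "alice")); // Map { "bob" => 0.5, "carol" => 0.25 }
```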

Conclusion

The Trustnet browser extension's approach to in-place, democratized content moderation highlights the potential for empowering web users in the fight against misinformation. By enabling users to assess content and rely on assessments from trusted sources directly within their browsing experience, this tool represents a significant step toward a more autonomous, inclusive, and diverse ecosystem for content credibility assessment on the web.

Authors (2)
  1. Farnaz Jahanbakhsh
  2. David R. Karger