Does the Source of a Warning Matter? Examining the Effectiveness of Veracity Warning Labels Across Warners (2407.21592v1)
Abstract: In this study, we conducted an online, between-subjects experiment (N = 2,049) to better understand the impact of warning label sources on information trust and sharing intentions. Across four warners (the social media platform, other social media users, AI, and fact checkers), we found that all significantly decreased trust in false information relative to control, but warnings from AI were modestly more effective. All warners except other social media users significantly decreased intentions to share false information, with AI again the most effective. These effects were moderated by prior trust in media and in the information itself. Most notably, warning labels from AI were significantly more effective than all other warning labels for participants who reported low trust in news organizations, whereas warnings from AI were no more effective than any other warning label for participants who reported high trust in news organizations.
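The abstract reports that warner effects on trust and sharing were moderated by prior trust in news organizations. A minimal sketch of how such a condition-by-moderator analysis could be run is shown below, assuming an OLS model with treatment-coded conditions and an interaction term; the variable names and simulated data are illustrative stand-ins, not the authors' materials or analysis.

```python
# A minimal sketch, assuming an OLS moderation analysis; this is not the
# paper's code, and all variable names and data below are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
conditions = ["control", "platform", "users", "ai", "fact_checkers"]

# Hypothetical participant-level data from a between-subjects design.
df = pd.DataFrame({
    "condition": rng.choice(conditions, size=n),
    "trust_news": rng.integers(0, 2, size=n),  # 0 = low, 1 = high prior trust in news orgs
})
df["trust_false_info"] = rng.normal(3.0, 1.0, size=n)  # outcome: trust in the false post

# Warner effects relative to control, moderated by prior trust in news organizations.
model = smf.ols(
    "trust_false_info ~ C(condition, Treatment('control')) * trust_news",
    data=df,
).fit()
print(model.summary())
```

Under this kind of specification, the condition terms capture each warner's effect for low-trust participants, and the interaction terms capture how those effects shift for high-trust participants, which is the pattern the abstract highlights for AI warnings.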