
Effects of Automated Misinformation Warning Labels on the Intents to Like, Comment and Share Posts (2403.12916v1)

Published 19 Mar 2024 in cs.HC

Abstract: With fact-checking by professionals being difficult to scale on social media, algorithmic techniques have been considered. However, it is uncertain how the public may react to labels produced by automated fact-checkers. In this study, we investigate automated warning labels derived from the misinformation detection literature and their effects on three forms of post engagement. Focusing on political posts, we also consider how partisanship affects engagement. In a two-phase within-subjects experiment with 200 participants, we found that generic warnings suppressed the intent to comment on and share posts, but not the intent to like them. Furthermore, when different reasons for the labels were provided, their effects on post engagement were inconsistent, suggesting that the reasons could have undesirably motivated engagement instead. Partisanship effects were observed across the labels, with higher engagement for politically congruent posts. We discuss the implications for the design and use of automated warning labels.
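The labels studied here come from automated misinformation detection rather than human fact-checkers, and the experiment contrasts a generic warning with warnings that also state a reason. As a purely illustrative sketch (not the authors' system), the snippet below shows how a hypothetical detector score and reason code might be turned into either kind of label; every name, wording, and threshold in it is an assumption for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical reason codes an automated detector might attach to a flag.
# Illustrative only; the paper's actual label wording may differ.
REASON_TEXT = {
    "unreliable_source": "The source of this post has shared false information before.",
    "disputed_claim": "Independent checks dispute a claim made in this post.",
    "manipulated_media": "The media in this post may have been altered.",
}

@dataclass
class WarningLabel:
    text: str
    reason: Optional[str] = None  # None -> generic warning, no reason shown

def make_label(score: float, reason_code: Optional[str] = None,
               threshold: float = 0.8) -> Optional[WarningLabel]:
    """Return a warning label if the detector's confidence exceeds the threshold.

    score        -- hypothetical detector confidence that the post is misinformation
    reason_code  -- optional key into REASON_TEXT for a reasoned (non-generic) label
    threshold    -- assumed cutoff; real systems tune this against false positives
    """
    if score < threshold:
        return None  # below threshold: show no label at all
    generic = "An automated system has flagged this post as possible misinformation."
    if reason_code is None:
        return WarningLabel(text=generic)
    return WarningLabel(text=generic, reason=REASON_TEXT.get(reason_code))

# Example: a generic label vs. a label that also gives a reason.
print(make_label(0.93))
print(make_label(0.93, reason_code="unreliable_source"))
```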

Authors (2)
  1. Gionnieve Lim (11 papers)
  2. Simon T. Perrault (15 papers)
Citations (2)