Corrective or Backfire: Characterizing and Predicting User Response to Social Correction (2403.04852v1)

Published 7 Mar 2024 in cs.SI

Abstract: Online misinformation poses a global risk with harmful implications for society. Ordinary social media users are known to actively reply to misinformation posts with counter-misinformation messages, a practice shown to be effective in containing the spread of misinformation and defined as "social correction". Nevertheless, it remains unknown how users respond to social correction in real-world scenarios, in particular whether it has a corrective or a backfire effect on users. Investigating this question is pivotal for developing and refining strategies that maximize the efficacy of social correction initiatives. To fill this gap, we conduct an in-depth, data-driven study to characterize and predict user responses to social correction through the lens of X (formerly Twitter), where a user response is instantiated as a reply written to a counter-misinformation message. Specifically, we first create a novel dataset of 55,549 triples of misinformation tweets, counter-misinformation replies, and responses to those replies, and then curate a taxonomy of the different kinds of user responses. Next, we conduct fine-grained statistical analysis of reply linguistic and engagement features, as well as repliers' user attributes, to identify the characteristics that are significant in determining whether a reply has a corrective or backfire effect. Finally, we build a user response prediction model that identifies whether a social correction will be corrective, neutral, or backfire, achieving a promising F1 score of 0.816. Our work enables stakeholders to monitor and predict user responses effectively, guiding the use of social correction to maximize its corrective impact and minimize backfire effects. The code and data are available at https://github.com/claws-lab/response-to-social-correction.
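
The abstract frames the prediction task as three-way classification (corrective, neutral, or backfire) over responses to counter-misinformation replies, evaluated with F1. The abstract does not specify the authors' architecture or features; the sketch below is an illustration only, assuming a hypothetical triples.csv with response_text and label columns and a simple TF-IDF plus logistic-regression baseline scored with macro F1, not the paper's actual pipeline.

```python
# Illustrative sketch only: a three-way classifier (corrective / neutral / backfire)
# over responses to counter-misinformation replies. The file name, column names,
# and the TF-IDF + logistic-regression pipeline are assumptions for demonstration,
# not the authors' method.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical CSV with one row per (misinformation tweet, counter-reply, response) triple.
df = pd.read_csv("triples.csv")  # assumed columns: "response_text", "label"

X_train, X_test, y_train, y_test = train_test_split(
    df["response_text"], df["label"],
    test_size=0.2, random_state=42, stratify=df["label"],
)

# Bag-of-ngrams features over the response text, then multinomial logistic regression.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)

# Macro-averaged F1 across the three classes, comparable in spirit
# (not in method) to the 0.816 reported in the abstract.
print("macro F1:", f1_score(y_test, clf.predict(X_test), average="macro"))
```

A stronger model along the lines the references suggest (e.g., a fine-tuned BERT or RoBERTa classifier) would follow the same train/evaluate structure with the text encoder swapped in.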

Authors (4)
  1. Bing He (82 papers)
  2. Yingchen Ma (2 papers)
  3. Mustaque Ahamad (13 papers)
  4. Srijan Kumar (61 papers)
Citations (2)