Correcting misinformation on social media with a large language model (2403.11169v4)

Published 17 Mar 2024 in cs.CL and cs.AI

Abstract: Real-world misinformation, often multimodal, can be partially or fully factual but misleading using diverse tactics like conflating correlation with causation. Such misinformation is severely understudied, challenging to address, and harms various social domains, particularly on social media, where it can spread rapidly. High-quality and timely correction of misinformation that identifies and explains its (in)accuracies effectively reduces false beliefs. Despite the wide acceptance of manual correction, it is difficult to be timely and scalable. While LLMs have versatile capabilities that could accelerate misinformation correction, they struggle due to a lack of recent information, a tendency to produce false content, and limitations in addressing multimodal information. We propose MUSE, an LLM augmented with access to and credibility evaluation of up-to-date information. By retrieving evidence as refutations or supporting context, MUSE identifies and explains content (in)accuracies with references. It conducts multimodal retrieval and interprets visual content to verify and correct multimodal content. Given the absence of a comprehensive evaluation approach, we propose 13 dimensions of misinformation correction quality. Then, fact-checking experts evaluate responses to social media content that are not presupposed to be misinformation but broadly include (partially) incorrect and correct posts that may (not) be misleading. Results demonstrate MUSE's ability to write high-quality responses to potential misinformation--across modalities, tactics, domains, political leanings, and for information that has not previously been fact-checked online--within minutes of its appearance on social media. Overall, MUSE outperforms GPT-4 by 37% and even high-quality responses from laypeople by 29%. Our work provides a general methodological and evaluative framework to correct misinformation at scale.

Authors (4)
  1. Xinyi Zhou (33 papers)
  2. Ashish Sharma (27 papers)
  3. Amy X. Zhang (58 papers)
  4. Tim Althoff (64 papers)
Citations (1)

Summary

Leveraging LLMs for Scalable Misinformation Correction on Social Media

Introduction

The proliferation of misinformation on social media platforms poses a significant societal challenge, undermining public trust in science and democratic institutions. Traditional correction relies on intervention by experts and laypeople, which, although effective, cannot scale to the volume of misinformation generated daily. This limitation is sharpened by the evolution of LLMs, which facilitate misinformation creation yet also hold potential for scalable correction. The paper introduces MUSE, a novel approach that augments an LLM with access to, and credibility evaluation of, up-to-date information for multimodal misinformation correction on social media. MUSE produces higher-quality corrections than both GPT-4 and high-quality corrections written by laypeople.

Approach

MUSE's design enables it to address not only textual but also visual misinformation by combining the interpretation of visual content with the retrieval of up-to-date, factual, and credible web knowledge. The process begins by describing images with image captioning models, augmented by celebrity recognition and optical character recognition (OCR), to produce more informative interpretations of visual content. Relevant web pages are then retrieved using generated queries and filtered by direct relevance and source credibility. Finally, MUSE generates corrections that draw on evidence extracted from these pages, grounding its explanations in accurate, trustworthy references. A minimal sketch of this pipeline appears below.
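To make the pipeline's stages concrete, here is a minimal Python sketch. It is illustrative only: the callables retrieve, score_credibility, and generate are hypothetical stand-ins for the paper's actual components (multimodal retrieval, credibility evaluation, and LLM generation), and the 0.7 credibility threshold is an arbitrary placeholder, not a value from the paper.

```python
# Illustrative sketch of a MUSE-style correction pipeline.
# All callables are hypothetical stand-ins, not the paper's implementation.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Evidence:
    url: str
    snippet: str


def correct_post(
    post_text: str,
    image_description: Optional[str],
    retrieve: Callable[[str], list[Evidence]],
    score_credibility: Callable[[str], float],
    generate: Callable[[str, list[Evidence]], str],
    min_credibility: float = 0.7,  # placeholder threshold, not from the paper
) -> str:
    """Retrieve evidence, filter by source credibility, and draft a referenced correction."""
    # 1. Fuse the post text with the (optional) visual description, standing in
    #    for the captioning + celebrity recognition + OCR step.
    claim = post_text if image_description is None else (
        f"{post_text}\n[Image: {image_description}]"
    )
    # 2. Retrieve candidate web evidence via generated queries.
    candidates = retrieve(claim)
    # 3. Keep only evidence from sufficiently credible sources.
    evidence = [e for e in candidates if score_credibility(e.url) >= min_credibility]
    # 4. Ask the LLM to identify and explain (in)accuracies, citing the references.
    return generate(claim, evidence)
```

Passing the retrieval, credibility, and generation steps as callables keeps the sketch self-contained while making clear that each stage is a pluggable component rather than a fixed implementation.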

Evaluation

An extensive evaluation involving experts in fact-checking and journalism assessed MUSE's corrections across 13 dimensions, including the factuality of the explanation, the relevance and credibility of references, and the overall quality of the correction. MUSE's corrections outperformed those generated by GPT-4 by 37% and high-quality corrections from laypeople by 29%, demonstrating its ability to correct misinformation promptly after it appears on social media. In particular, MUSE excelled at identifying inaccuracies, generating relevant and factual text, and providing credible references. The sketch below illustrates how such a headline percentage can be derived from per-dimension ratings.
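The paper's summary here does not specify how the headline percentages are aggregated from the 13-dimension ratings; the following sketch shows one plausible scheme (average within each dimension, then across dimensions, then take the relative gain), with all dimension names and numbers invented for illustration.

```python
# Illustrative only: one plausible way a relative improvement such as
# "+37% over GPT-4" could be computed from per-dimension quality ratings.
# Dimension names and scores below are placeholders, not the paper's data.

def mean_quality(ratings_by_dimension: dict[str, list[float]]) -> float:
    """Average expert ratings within each dimension, then across dimensions."""
    per_dim = [sum(r) / len(r) for r in ratings_by_dimension.values()]
    return sum(per_dim) / len(per_dim)


def relative_improvement(system: float, baseline: float) -> float:
    """Relative gain of `system` over `baseline`, as a percentage."""
    return 100.0 * (system - baseline) / baseline


# Hypothetical ratings on a 0-1 scale for two of the 13 dimensions:
muse = mean_quality({"factuality": [0.9, 0.8], "references": [0.8, 0.78]})
gpt4 = mean_quality({"factuality": [0.6, 0.62], "references": [0.6, 0.58]})
print(f"{relative_improvement(muse, gpt4):+.0f}%")  # -> +37%
```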

Implications and Future Directions

The findings underscore the potential of MUSE and similar technologies to combat misinformation on social media platforms both effectively and efficiently. The approach addresses the scalability limits of manual correction while also improving the accuracy and trustworthiness of generated corrections. Future research could integrate video inputs, support multiple languages, and extend the evaluation beyond X Community Notes to other platforms. Further development might also reduce the cost and latency of correction generation, currently estimated at $0.50 per social media post, and examine the impact of correction immediacy on MUSE's performance.

Conclusion

The development of MUSE represents a significant advance in the use of LLMs for misinformation correction on social media. By handling multimodal misinformation, accessing up-to-date information, and generating corrections with accurate references, MUSE sets a new standard for automated correction technologies. Its superior performance, demonstrated through a comprehensive expert evaluation, highlights its potential as a scalable solution to the misinformation problem plaguing social media platforms.
