Large Language Model Agent for Fake News Detection (2405.01593v1)
Abstract: In the current digital era, the rapid spread of misinformation on online platforms presents significant challenges to societal well-being, public trust, and democratic processes, influencing critical decision-making and public opinion. To address these challenges, there is a growing need for automated fake news detection mechanisms. Pre-trained LLMs have demonstrated exceptional capabilities across various NLP tasks, prompting exploration into their potential for verifying news claims. Instead of employing LLMs in a non-agentic way, where an LLM generates a response from a direct prompt in a single shot, our work introduces FactAgent, an agentic approach to utilizing LLMs for fake news detection. FactAgent enables LLMs to emulate human expert behavior in verifying news claims, without any model training, by following a structured workflow. This workflow breaks the complex task of news veracity checking into multiple sub-steps, in which the LLM completes simple tasks using its internal knowledge or external tools. At the final step of the workflow, the LLM integrates all findings gathered throughout the workflow to determine the news claim's veracity. Compared to manual human verification, FactAgent offers enhanced efficiency. Experimental studies demonstrate the effectiveness of FactAgent in verifying claims without any training process. Moreover, FactAgent provides transparent explanations at each step of the workflow and during final decision-making, offering end users insight into the reasoning behind each fake news judgment. FactAgent is also highly adaptable: the tools the LLM leverages within the workflow, and the workflow itself, can be straightforwardly updated with domain knowledge, enabling FactAgent's application to news verification across various domains.
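The structured workflow the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the two tool names (`style_tool`, `commonsense_tool`), their stub heuristics, and the majority-vote integration step are all assumptions standing in for the LLM calls and the paper's actual tool set and final-decision prompt.

```python
# Sketch of a FactAgent-style agentic workflow: run each sub-step (tool),
# collect a per-step verdict with a transparent explanation, then integrate
# all findings in a final step. Stub heuristics replace real LLM calls.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Finding:
    tool: str
    verdict: str      # "fake" or "real"
    explanation: str  # rationale surfaced to the end user


def style_tool(claim: str) -> Finding:
    # Stub heuristic: sensational punctuation/casing as a style signal.
    fake = claim.count("!") >= 2 or claim.isupper()
    return Finding("style", "fake" if fake else "real",
                   "checked for a sensational writing style")


def commonsense_tool(claim: str) -> Finding:
    # Stub heuristic: an LLM would consult internal knowledge here.
    fake = "miracle cure" in claim.lower()
    return Finding("commonsense", "fake" if fake else "real",
                   "checked the claim against internal knowledge")


def run_workflow(claim: str,
                 tools: List[Callable[[str], Finding]]
                 ) -> Tuple[str, List[Finding]]:
    """Execute every sub-step, then integrate findings into one verdict."""
    findings = [tool(claim) for tool in tools]
    fake_votes = sum(f.verdict == "fake" for f in findings)
    verdict = "fake" if fake_votes > len(findings) / 2 else "real"
    return verdict, findings


verdict, findings = run_workflow(
    "MIRACLE CURE discovered!! Doctors hate it!!",
    [style_tool, commonsense_tool],
)
print(verdict)  # each Finding also carries its own explanation
```

Because every `Finding` records which tool produced it and why, the final verdict can be traced step by step, mirroring the transparency property the abstract emphasizes; swapping tools in or out of the list is how the workflow would be adapted to a new domain.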
- Xinyi Li
- Yongfeng Zhang
- Edward C. Malthouse