Uncertainty Resolution in Misinformation Detection (2401.01197v1)
Abstract: Misinformation poses a variety of risks, such as undermining public trust and distorting factual discourse. LLMs such as GPT-4 have been shown to be effective at mitigating misinformation, particularly when handling statements for which sufficient context is provided. However, they struggle to accurately assess ambiguous or context-deficient statements. This work introduces a new method to resolve uncertainty in such statements. We propose a framework to categorize missing information and publish category labels for the LIAR-New dataset, which is adaptable to cross-domain content with missing information. We then leverage this framework to generate effective user queries for the missing context. Compared to baselines, our method improves the rate at which generated questions are answerable by the user by 38 percentage points, and classification performance by over 10 percentage points of macro F1. Thus, this approach may provide a valuable component for future misinformation mitigation pipelines.
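The abstract describes a two-stage pipeline: first categorize what information a context-deficient statement is missing, then use that category to generate a clarifying question the user can actually answer. The sketch below is a minimal, hypothetical illustration of such a pipeline using an OpenAI-style chat API; the category names, prompts, and function names are assumptions for illustration and are not the paper's published taxonomy or prompts.

```python
# Hypothetical sketch of the pipeline described in the abstract:
# (1) classify which kind of context a statement is missing,
# (2) generate one clarifying question conditioned on that category.
# Categories and prompt wording here are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative missing-information categories (the paper publishes its own labels for LIAR-New).
CATEGORIES = ["who", "when", "where", "source", "quantity", "other"]


def categorize_missing_info(statement: str) -> str:
    """Ask the model which kind of context the statement lacks."""
    prompt = (
        "The following statement cannot be assessed as written because context is missing.\n"
        f"Statement: {statement}\n"
        f"Which single category best describes the missing information? Options: {', '.join(CATEGORIES)}.\n"
        "Answer with one category name only."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()


def generate_user_query(statement: str, category: str) -> str:
    """Generate one clarifying question targeted at the missing-information category."""
    prompt = (
        f"Statement: {statement}\n"
        f"The statement is missing '{category}' information. "
        "Write one short question that the person who shared this statement could answer "
        "to supply the missing context."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()


if __name__ == "__main__":
    stmt = "The new policy doubled unemployment."
    cat = categorize_missing_info(stmt)
    print(cat, "->", generate_user_query(stmt, cat))
```

Conditioning the question on an explicit missing-information category, rather than asking for a clarifying question directly, is the design choice the abstract credits for making generated questions more answerable by the user.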