
Uncertainty Resolution in Misinformation Detection (2401.01197v1)

Published 2 Jan 2024 in cs.CL and cs.AI

Abstract: Misinformation poses a variety of risks, such as undermining public trust and distorting factual discourse. LLMs like GPT-4 have been shown effective in mitigating misinformation, particularly in handling statements where enough context is provided. However, they struggle to assess ambiguous or context-deficient statements accurately. This work introduces a new method to resolve uncertainty in such statements. We propose a framework to categorize missing information and publish category labels for the LIAR-New dataset, which is adaptable to cross-domain content with missing information. We then leverage this framework to generate effective user queries for missing context. Compared to baselines, our method improves the rate at which generated questions are answerable by the user by 38 percentage points and classification performance by over 10 percentage points macro F1. Thus, this approach may provide a valuable component for future misinformation mitigation pipelines.
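The abstract describes a pipeline in three steps: detect that a statement lacks context, categorize what is missing and ask the user a targeted clarifying question, then classify the statement once the context is supplied. The Python sketch below is only a minimal illustration of that flow under stated assumptions: the category names, prompts, and the `llm` callable are placeholders, not the paper's published LIAR-New taxonomy, prompts, or models.

```python
from typing import Callable

# Illustrative categories of missing information; the paper publishes its own
# category labels for LIAR-New, which may differ from these placeholders.
MISSING_INFO_CATEGORIES = [
    "who (unspecified actor or source)",
    "when (unspecified time frame)",
    "where (unspecified location)",
    "what (vague or underspecified claim)",
]


def categorize_missing_info(statement: str, llm: Callable[[str], str]) -> str:
    """Ask the model which kind of context the statement is missing."""
    prompt = (
        "The following statement may lack context needed for fact-checking.\n"
        f"Statement: {statement}\n"
        "Which of these categories best describes the missing information?\n"
        + "\n".join(f"- {c}" for c in MISSING_INFO_CATEGORIES)
    )
    return llm(prompt).strip()


def generate_user_query(statement: str, category: str, llm: Callable[[str], str]) -> str:
    """Generate one concrete clarifying question the user can actually answer."""
    prompt = (
        f"Statement: {statement}\n"
        f"Missing information category: {category}\n"
        "Write a single, concrete question to the user that would supply the missing context."
    )
    return llm(prompt).strip()


def classify_with_context(statement: str, user_answer: str, llm: Callable[[str], str]) -> str:
    """Classify the statement once the user has resolved the ambiguity."""
    prompt = (
        f"Statement: {statement}\n"
        f"Additional context from the user: {user_answer}\n"
        "Label the statement as 'true', 'false', or 'unverifiable'."
    )
    return llm(prompt).strip()


if __name__ == "__main__":
    # Canned stub standing in for a real LLM call (e.g. GPT-4), so the sketch
    # runs end to end without external dependencies.
    canned = iter([
        "when (unspecified time frame)",
        "Which vote or year does this statement refer to?",
        "false",
    ])
    stub_llm = lambda prompt: next(canned)

    statement = "The senator voted against the bill."
    category = categorize_missing_info(statement, stub_llm)
    question = generate_user_query(statement, category, stub_llm)
    label = classify_with_context(statement, "The 2021 infrastructure bill.", stub_llm)
    print(category, question, label, sep="\n")
```

In practice the `llm` callable would wrap an actual model API, and the generated question would be shown to the user before classification; the abstract's reported gains (38 points in answerability, over 10 points macro F1) refer to the paper's own prompts and category framework, not this placeholder code.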
