
TELLER: A Trustworthy Framework for Explainable, Generalizable and Controllable Fake News Detection (2402.07776v2)

Published 12 Feb 2024 in cs.CL

Abstract: The proliferation of fake news has emerged as a severe societal problem, raising significant interest from industry and academia. While existing deep-learning based methods have made progress in detecting fake news accurately, their reliability may be compromised by non-transparent reasoning processes, poor generalization abilities, and the inherent risks of integration with LLMs. To address this challenge, we propose TELLER, a novel framework for trustworthy fake news detection that prioritizes the explainability, generalizability, and controllability of models. This is achieved via a dual-system framework that integrates cognition and decision systems, adhering to the principles above. The cognition system harnesses human expertise to generate logical predicates, which guide LLMs in generating human-readable logic atoms. Meanwhile, the decision system deduces generalizable logic rules to aggregate these atoms, enabling the identification of the truthfulness of the input news across diverse domains and enhancing transparency in the decision-making process. Finally, we present comprehensive evaluation results on four datasets, demonstrating the feasibility and trustworthiness of our proposed framework. Our implementation is available at https://github.com/less-and-less-bugs/Trust_TELLER.

Tackling Fake News with TELLER: A Trustworthy AI Approach

In the era of information overload, distinguishing between genuine and fake news has become increasingly challenging. The advent of sophisticated AI technologies has further complicated the issue, making it easier than ever to generate convincing but false narratives. To address this, the paper presents TELLER, a framework for trustworthy fake news detection that prioritizes explainability, generalizability, and controllability. This post analyzes TELLER's methodology, evaluation results, and implications for future AI development.

Bridging the Trust Gap in Fake News Detection

TELLER stands for a Trustworthy framework for Explainable, generaLizable, and controllabLe fake news dEtectoR. Its dual-system architecture merges human-like reasoning with AI capabilities to evaluate the truthfulness of news content systematically. The cognition system converts expert knowledge into a series of Yes/No questions, producing logical predicates that outline the steps needed to determine authenticity. It then leverages LLMs to answer these questions, yielding a set of logic atoms (basic truth-valued units), as sketched below.
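As a concrete illustration, here is a minimal sketch of the cognition step. The predicate templates, the `ask_llm` helper, and the three-valued yes/no/unknown mapping are all illustrative assumptions standing in for the paper's actual question set and LLM interface.

```python
# Illustrative sketch of the cognition system: expert-written Yes/No
# questions (logical predicates) are instantiated for a news item and
# answered by an LLM, yielding truth-valued logic atoms.
from typing import Dict

# Hypothetical predicate templates; the paper defines its own question set.
PREDICATES = {
    "is_consistent": (
        "Does the claim '{claim}' agree with the evidence '{evidence}'? "
        "Answer Yes or No."
    ),
    "has_credible_source": (
        "Does the claim '{claim}' come from a generally credible source? "
        "Answer Yes or No."
    ),
}

def ask_llm(prompt: str) -> str:
    """Stand-in for any chat-completion client; replace with a real API call."""
    raise NotImplementedError

def cognition_step(claim: str, evidence: str) -> Dict[str, float]:
    """Instantiate each predicate and map the LLM's answer to a truth value."""
    atoms: Dict[str, float] = {}
    for name, template in PREDICATES.items():
        answer = ask_llm(template.format(claim=claim, evidence=evidence))
        answer = answer.strip().lower()
        if answer.startswith("yes"):
            atoms[name] = 1.0    # atom holds
        elif answer.startswith("no"):
            atoms[name] = -1.0   # atom does not hold
        else:
            atoms[name] = 0.0    # unknown / abstain
    return atoms
```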

The decision system of TELLER employs a modified Disjunctive Normal Form (DNF) layer that aggregates these logic atoms into interpretable logic rules. This setup not only ensures that the AI's decisions can be explained in human-readable form but also allows the decision-making criteria to be adjusted based on expert input. Importantly, this embeds a layer of human oversight into the AI's operation, enhancing the model's reliability and trustworthiness.
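To make the aggregation concrete, here is a minimal sketch of a trainable DNF-style layer in PyTorch. It assumes atoms arrive as truth values in [-1, 1] and loosely follows the general semi-symbolic DNF-layer idea; the paper's modified layer may differ in its exact formulation.

```python
# Simplified trainable DNF-style layer: each conjunction softly ANDs a
# learned subset of atoms; a final disjunction softly ORs the conjunctions.
import torch
import torch.nn as nn

class SoftDNF(nn.Module):
    def __init__(self, n_atoms: int, n_conjuncts: int, delta: float = 1.0):
        super().__init__()
        self.conj_w = nn.Parameter(0.1 * torch.randn(n_conjuncts, n_atoms))
        self.disj_w = nn.Parameter(0.1 * torch.randn(n_conjuncts))
        self.delta = delta

    def forward(self, atoms: torch.Tensor) -> torch.Tensor:
        # atoms: (batch, n_atoms), truth values in [-1, 1]
        aw = self.conj_w.abs()
        # Soft AND: positive only when every selected atom matches its sign.
        conj_bias = self.delta * (aw.sum(-1) - aw.max(-1).values)
        conj = torch.tanh(atoms @ self.conj_w.t() - conj_bias)
        dw = self.disj_w.abs()
        # Soft OR: positive as soon as one selected conjunction fires.
        disj_bias = self.delta * (dw.sum() - dw.max())
        return torch.tanh(conj @ self.disj_w + disj_bias)  # (batch,)
```

A positive output can be read as one label and a negative output as the other; because each conjunction ranges over named atoms, the learned weights remain inspectable rather than opaque.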

Evaluation and Results

TELLER was rigorously tested across four datasets, LIAR, Constraint, PolitiFact, and GossipCop, demonstrating strong performance in identifying fake news. By using LLMs to generate the logic atoms that underpin its decision-making, it significantly outperforms direct prediction baselines. TELLER's generalizability is particularly notable: it maintains high accuracy even on news domains not encountered during training.

A noteworthy aspect of TELLER is its explainability. The framework does not just provide a verdict on the authenticity of news but also shares the 'why' behind its decisions by outlining the logical steps taken to reach that conclusion. This feature is crucial for building trust among end-users who seek transparency in AI operations.
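As an illustration of what such an explanation can look like, the following sketch reads an approximate symbolic rule out of the SoftDNF layer above by thresholding its weights. The threshold, the naming, and the decision to ignore the sign of the disjunction weights are simplifying assumptions, not the paper's extraction procedure.

```python
# Illustrative rule read-out from the SoftDNF sketch above: thresholding
# the weights recovers which atoms each conjunction uses and with what
# polarity, e.g. "NOT has_credible_source AND NOT is_consistent".
from typing import List, Sequence

def extract_rules(dnf: "SoftDNF", atom_names: Sequence[str],
                  threshold: float = 0.5) -> List[str]:
    rules = []
    for j, row in enumerate(dnf.conj_w.detach()):
        literals = [
            ("" if w > 0 else "NOT ") + name
            for name, w in zip(atom_names, row.tolist())
            if abs(w) > threshold
        ]
        # Keep only conjunctions that the disjunction actually uses.
        if literals and abs(dnf.disj_w[j].item()) > threshold:
            rules.append(" AND ".join(literals))
    return rules  # each entry is one disjunct of the learned DNF
```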

Additionally, TELLER offers controllability by enabling adjustments to the question set and logic rules. This means that as misinformation evolves, so too can TELLER, by incorporating new knowledge and expertise into its decision-making framework. This adaptability is essential for keeping pace with the rapidly changing landscape of online information.

The Future of AI in Fake News Detection

TELLER marks a significant step forward in the battle against misinformation, combining the efficiency of AI with the critical judgment of human experts. Its development underlines the importance of trustworthiness in AI applications, particularly those influencing public opinion and discourse.

As AI continues to advance, the principles embedded in TELLER provide a valuable blueprint for future innovations. By prioritizing explainability, generalizability, and controllability, AI systems can be designed not only to tackle complex challenges but also to do so in a manner that enhances trust and transparency.

In the ongoing efforts to curb the spread of fake news, TELLER represents a promising approach that leverages the best of artificial and human intelligence. Its potential implications extend far beyond immediate applications, suggesting a future where AI can be relied upon to uphold truth and integrity in our digital lives.

Authors (4)
  1. Hui Liu (481 papers)
  2. Wenya Wang (40 papers)
  3. Haoru Li (4 papers)
  4. Haoliang Li (67 papers)