Evidence-backed Fact Checking using RAG and Few-Shot In-Context Learning with LLMs (2408.12060v2)

Published 22 Aug 2024 in cs.CL and cs.AI

Abstract: Given the widespread dissemination of misinformation on social media, implementing fact-checking mechanisms for online claims is essential. Manually verifying every claim is very challenging, underscoring the need for an automated fact-checking system. This paper presents our system designed to address this issue. We utilize the Averitec dataset (Schlichtkrull et al., 2023) to assess the performance of our fact-checking system. In addition to veracity prediction, our system provides supporting evidence, which is extracted from the dataset. We develop a Retrieve and Generate (RAG) pipeline to extract relevant evidence sentences from a knowledge base, which are then input along with the claim into an LLM for classification. We also evaluate the few-shot In-Context Learning (ICL) capabilities of multiple LLMs. Our system achieves an 'Averitec' score of 0.33, which is a 22% absolute improvement over the baseline. Our code is publicly available at https://github.com/ronit-singhal/evidence-backed-fact-checking-using-rag-and-few-shot-in-context-learning-with-LLMs.
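
The abstract describes a two-stage design: retrieve candidate evidence sentences from a knowledge base, then classify the claim with a few-shot-prompted LLM. The sketch below illustrates that flow under stated assumptions; the TF-IDF retriever, the AVeriTeC-style label set, and the call_llm placeholder are illustrative choices, not the authors' released implementation (see their GitHub repository for the actual code).

```python
# Minimal sketch of a retrieve-then-classify fact-checking pipeline.
# Assumptions: TF-IDF retrieval, four AVeriTeC-style labels, and a generic
# call_llm(prompt) -> str callable standing in for any LLM API.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

LABELS = [
    "Supported",
    "Refuted",
    "Not Enough Evidence",
    "Conflicting Evidence/Cherrypicking",
]

def retrieve_evidence(claim: str, knowledge_sentences: list[str], k: int = 5) -> list[str]:
    """Return the k knowledge-base sentences most similar to the claim."""
    vectorizer = TfidfVectorizer().fit(knowledge_sentences + [claim])
    sentence_vecs = vectorizer.transform(knowledge_sentences)
    claim_vec = vectorizer.transform([claim])
    scores = cosine_similarity(claim_vec, sentence_vecs)[0]
    top_idx = scores.argsort()[::-1][:k]
    return [knowledge_sentences[i] for i in top_idx]

def build_prompt(claim: str, evidence: list[str], few_shot_examples: list[dict]) -> str:
    """Assemble a few-shot in-context-learning prompt for veracity classification."""
    parts = ["Classify each claim as one of: " + ", ".join(LABELS) + "."]
    for ex in few_shot_examples:  # each example: {"claim": ..., "evidence": ..., "label": ...}
        parts.append(f"Claim: {ex['claim']}\nEvidence: {ex['evidence']}\nLabel: {ex['label']}")
    parts.append(f"Claim: {claim}\nEvidence: {' '.join(evidence)}\nLabel:")
    return "\n\n".join(parts)

def classify_claim(claim, knowledge_sentences, few_shot_examples, call_llm):
    """Retrieve evidence, prompt the LLM, and return (predicted_label, evidence)."""
    evidence = retrieve_evidence(claim, knowledge_sentences)
    prompt = build_prompt(claim, evidence, few_shot_examples)
    return call_llm(prompt).strip(), evidence
```

Keeping the model behind a plain call_llm(prompt) callable makes it straightforward to swap backends, which mirrors the abstract's comparison of few-shot ICL performance across multiple LLMs.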

Authors (5)
  1. Ronit Singhal (1 paper)
  2. Pransh Patwa (2 papers)
  3. Parth Patwa (28 papers)
  4. Aman Chadha (110 papers)
  5. Amitava Das (45 papers)
Citations (1)
