Language Models as Fact Checkers? (2006.04102v2)

Published 7 Jun 2020 in cs.CL and cs.AI

Abstract: Recent work has suggested that language models (LMs) store both common-sense and factual knowledge learned from pre-training data. In this paper, we leverage this implicit knowledge to create an effective end-to-end fact checker using solely a language model, without any external knowledge or explicit retrieval components. While previous work on extracting knowledge from LMs has focused on the task of open-domain question answering, to the best of our knowledge, this is the first work to examine the use of language models as fact checkers. In a closed-book setting, we show that our zero-shot LM approach outperforms a random baseline on the standard FEVER task, and that our fine-tuned LM compares favorably with standard baselines. Though we do not ultimately outperform methods which use explicit knowledge bases, we believe our exploration shows that this method is viable and has much room for exploration.
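The sketch below illustrates the general idea of closed-book, zero-shot fact checking with a pretrained masked LM; it is not the authors' exact pipeline. A claim is wrapped in an assumed cloze-style template and the LM's preference for "true" versus "false" at the masked position is read off as a verdict (FEVER's third label, NOT ENOUGH INFO, is omitted here for brevity). The model name and template are illustrative assumptions.

```python
# Minimal sketch: closed-book, zero-shot claim verification with a masked LM.
# Assumptions: bert-base-uncased as the LM and a hand-written cloze template;
# this is an illustration of the general approach, not the paper's method.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def zero_shot_verdict(claim: str) -> str:
    """Label a claim SUPPORTS or REFUTES using only the LM's implicit knowledge."""
    template = f"{claim} This statement is [MASK]."
    # Restrict mask predictions to the two verdict words and compare their scores.
    candidates = fill(template, targets=["true", "false"])
    best = max(candidates, key=lambda c: c["score"])
    return "SUPPORTS" if best["token_str"].strip() == "true" else "REFUTES"

if __name__ == "__main__":
    print(zero_shot_verdict("Paris is the capital of France."))
    print(zero_shot_verdict("The Eiffel Tower is located in Berlin."))
```

A fine-tuned variant would instead train a classifier head on FEVER claims (without evidence retrieval), which is the setting the abstract compares against standard baselines.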

Authors (6)
  1. Nayeon Lee (28 papers)
  2. Belinda Z. Li (21 papers)
  3. Sinong Wang (45 papers)
  4. Wen-tau Yih (84 papers)
  5. Hao Ma (116 papers)
  6. Madian Khabsa (38 papers)
Citations (70)
