Fact or Fiction? Can LLMs be Reliable Annotators for Political Truths? (2411.05775v1)

Published 8 Nov 2024 in cs.CL and cs.AI

Abstract: Political misinformation poses significant challenges to democratic processes, shaping public opinion and trust in media. Manual fact-checking methods face issues of scalability and annotator bias, while machine learning models require large, costly labelled datasets. This study investigates the use of state-of-the-art LLMs as reliable annotators for detecting political factuality in news articles. Using open-source LLMs, we create a politically diverse dataset, labelled for bias through LLM-generated annotations. These annotations are validated by human experts and further evaluated by LLM-based judges to assess the accuracy and reliability of the annotations. Our approach offers a scalable and robust alternative to traditional fact-checking, enhancing transparency and public trust in media.
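The abstract describes a pipeline in which open-source LLMs label news articles for factuality and the labels are then checked for reliability. A minimal, purely illustrative sketch of that idea is below; the `query_llm` stub, the model names, and the majority-vote aggregation are assumptions for illustration, not details from the paper.

```python
from collections import Counter

def query_llm(model: str, article: str) -> str:
    """Stand-in for a real LLM call; returns a factuality label.

    A real implementation would prompt the model, e.g.:
    "Label this article as 'factual' or 'misinformation': <article>"
    The canned responses below are placeholders.
    """
    canned = {"llm-a": "factual", "llm-b": "factual", "llm-c": "misinformation"}
    return canned[model]

def annotate(article: str, models: list[str]) -> str:
    """Collect one label per annotator model and aggregate by majority vote."""
    labels = [query_llm(m, article) for m in models]
    winner, _ = Counter(labels).most_common(1)[0]
    return winner

label = annotate("Example news article text...", ["llm-a", "llm-b", "llm-c"])
print(label)  # majority label across the annotator models
```

In the paper's actual setup, human experts and LLM-based judges validate such annotations; the vote here simply illustrates one way to aggregate multiple annotators' outputs.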

Authors (3)
  1. Veronica Chatrath
  2. Marcelo Lotif
  3. Shaina Raza
