Fact or Fiction? Can LLMs be Reliable Annotators for Political Truths? (2411.05775v1)
Abstract: Political misinformation poses significant challenges to democratic processes, shaping public opinion and trust in media. Manual fact-checking methods face issues of scalability and annotator bias, while machine learning models require large, costly labelled datasets. This study investigates the use of state-of-the-art large language models (LLMs) as reliable annotators for detecting political factuality in news articles. Using open-source LLMs, we create a politically diverse dataset, labelled for bias through LLM-generated annotations. These annotations are validated by human experts and further assessed by LLM-based judges for accuracy and reliability. Our approach offers a scalable and robust alternative to traditional fact-checking, enhancing transparency and public trust in media.
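The pipeline the abstract describes has two LLM roles: an annotator that labels each article and a judge that reviews the annotator's label before human validation. Below is a minimal sketch of that loop, assuming an OpenAI-compatible chat-completions endpoint (e.g., a locally served open-source model); the endpoint URL, model names, prompts, and the two-way label set are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch of an LLM-annotator + LLM-judge loop for political factuality
# labelling. Endpoint, model names, prompts, and label set are
# illustrative assumptions, not taken from the paper.
import requests

API_URL = "http://localhost:8000/v1/chat/completions"  # any OpenAI-compatible server


def chat(model: str, system: str, user: str) -> str:
    """Send one chat-completion request and return the reply text."""
    payload = {
        "model": model,
        "temperature": 0,  # deterministic output for reproducible labels
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }
    resp = requests.post(API_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()


def annotate(article: str, annotator_model: str = "llama-3-8b-instruct") -> str:
    """Ask the annotator LLM for a factuality label on one news article."""
    system = ("You are a fact-checking annotator. Label the article as "
              "'factual' or 'misinformation'. Reply with the label only.")
    return chat(annotator_model, system, article)


def judge(article: str, label: str, judge_model: str = "mistral-7b-instruct") -> bool:
    """Ask a second LLM to verify the annotator's label (LLM-as-judge)."""
    system = ("You are reviewing another model's fact-check of a news "
              "article. Reply 'agree' or 'disagree' with the proposed label.")
    user = f"Article:\n{article}\n\nProposed label: {label}"
    return chat(judge_model, system, user).lower().startswith("agree")


if __name__ == "__main__":
    article = "Example news article text to be labelled..."
    label = annotate(article)
    print(f"label: {label} | judge agrees: {judge(article, label)}")
```

In practice, labels where the judge disagrees would be routed to the human-expert validation step the abstract mentions, keeping manual effort focused on contested cases.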
- Veronica Chatrath
- Marcelo Lotif
- Shaina Raza