Multilingual and Multi-topical Benchmark of Fine-tuned Language models and Large Language Models for Check-Worthy Claim Detection (2311.06121v2)

Published 10 Nov 2023 in cs.CL

Abstract: This study compares the performance of (1) fine-tuned language models and (2) large language models (LLMs) on the task of check-worthy claim detection. For the purpose of the comparison, we composed a multilingual and multi-topical dataset comprising texts from various sources and styles. Building on this, we performed a benchmark analysis to determine the most general multilingual and multi-topical claim detector. We chose three state-of-the-art models for the check-worthy claim detection task and fine-tuned them. Furthermore, we selected four state-of-the-art LLMs without any fine-tuning. We adapted the models for multilingual settings and, through extensive experimentation and evaluation, assessed the performance of all the models in terms of accuracy, recall, and F1-score in in-domain and cross-domain scenarios. Our results demonstrate that despite the technological progress in the area of natural language processing, the models fine-tuned for the task of check-worthy claim detection still outperform the zero-shot LLM approaches in cross-domain settings.
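The abstract reports accuracy, recall, and F1-score for each detector on in-domain and cross-domain splits. As a minimal sketch (not taken from the paper), the snippet below shows how such binary check-worthiness predictions could be scored with scikit-learn; the labels and predictions are hypothetical placeholders, with 1 meaning "check-worthy" and 0 meaning "not check-worthy".

```python
# Hypothetical evaluation sketch: score a claim detector's binary predictions
# with the metrics the abstract mentions (accuracy, recall, F1-score).
from sklearn.metrics import accuracy_score, recall_score, f1_score

# Placeholder gold labels for a held-out (e.g. cross-domain) split.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
# Placeholder predictions from a fine-tuned or zero-shot model.
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print(f"Accuracy: {accuracy_score(y_true, y_pred):.2f}")
print(f"Recall:   {recall_score(y_true, y_pred):.2f}")
print(f"F1-score: {f1_score(y_true, y_pred):.2f}")
```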

Authors (5)
  1. Martin Hyben (4 papers)
  2. Sebastian Kula (2 papers)
  3. Ivan Srba (28 papers)
  4. Robert Moro (22 papers)
  5. Jakub Simko (18 papers)
Citations (1)