
Human Still Wins over LLM: An Empirical Study of Active Learning on Domain-Specific Annotation Tasks (2311.09825v1)

Published 16 Nov 2023 in cs.CL

Abstract: LLMs have demonstrated considerable advances, and several claims have been made about their exceeding human performance. However, in real-world tasks, domain knowledge is often required. Low-resource learning methods like Active Learning (AL) have been proposed to tackle the cost of domain-expert annotation, raising this question: Can LLMs surpass compact models trained with expert annotations in domain-specific tasks? In this work, we conduct an empirical experiment on four datasets from three different domains, comparing SOTA LLMs with small models trained with AL on expert annotations. We find that small models can outperform GPT-3.5 with a few hundred labeled examples, and that they achieve higher or comparable performance to GPT-4 despite being hundreds of times smaller. Based on these findings, we posit that LLM predictions can serve as a warmup method in real-world applications, and that human experts remain indispensable in annotation tasks driven by domain-specific knowledge.
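
To make the "small model + a few hundred expert labels" setup concrete, below is a minimal sketch of a pool-based active learning loop with uncertainty sampling. The dataset, the logistic-regression-over-TF-IDF classifier, the batch size, and the query strategy are all illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal pool-based active learning sketch (illustrative, not the paper's setup).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical unlabeled pool; in practice, labels come from domain experts
# one query batch at a time rather than from a stored oracle array.
pool_texts = [f"example document {i}" for i in range(1000)]
oracle_labels = np.random.randint(0, 2, size=len(pool_texts))

vectorizer = TfidfVectorizer()
X_pool = vectorizer.fit_transform(pool_texts)

# Small random seed set of labeled examples.
labeled_idx = list(np.random.choice(len(pool_texts), size=20, replace=False))
unlabeled_idx = [i for i in range(len(pool_texts)) if i not in set(labeled_idx)]

clf = LogisticRegression(max_iter=1000)
for _ in range(10):  # 10 rounds x 20 queries ~ a few hundred expert labels
    clf.fit(X_pool[labeled_idx], oracle_labels[labeled_idx])

    # Uncertainty sampling: query the examples the model is least confident about.
    probs = clf.predict_proba(X_pool[unlabeled_idx])
    uncertainty = 1.0 - probs.max(axis=1)
    query = np.argsort(-uncertainty)[:20]

    newly_labeled = [unlabeled_idx[i] for i in query]
    labeled_idx.extend(newly_labeled)
    unlabeled_idx = [i for i in unlabeled_idx if i not in set(newly_labeled)]
```

In the warmup variant suggested by the authors, the seed set could be initialized from LLM predictions rather than random sampling, with experts then correcting and extending the labels through the AL loop.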

Authors (8)
  1. Yuxuan Lu (26 papers)
  2. Bingsheng Yao (49 papers)
  3. Shao Zhang (18 papers)
  4. Yun Wang (229 papers)
  5. Peng Zhang (641 papers)
  6. Tun Lu (38 papers)
  7. Toby Jia-Jun Li (57 papers)
  8. Dakuo Wang (87 papers)
Citations (11)