A Confidence-based Acquisition Model for Self-supervised Active Learning and Label Correction (2310.08944v2)

Published 13 Oct 2023 in cs.CL and cs.LG

Abstract: Supervised neural approaches are hindered by their dependence on large, meticulously annotated datasets, a requirement that is particularly cumbersome for sequential tasks. The quality of annotations tends to deteriorate with the transition from expert-based to crowd-sourced labelling. To address these challenges, we present CAMEL (Confidence-based Acquisition Model for Efficient self-supervised active Learning), a pool-based active learning framework tailored to sequential multi-output problems. CAMEL possesses two core features: (1) it requires expert annotators to label only a fraction of a chosen sequence, and (2) it facilitates self-supervision for the remainder of the sequence. By deploying a label correction mechanism, CAMEL can also be utilised for data cleaning. We evaluate CAMEL on two sequential tasks, with a special emphasis on dialogue belief tracking, a task plagued by the constraints of limited and noisy datasets. Our experiments demonstrate that CAMEL significantly outperforms the baselines in terms of efficiency. Furthermore, the data corrections suggested by our method contribute to an overall improvement in the quality of the resulting datasets.
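To make the acquisition idea in the abstract concrete, below is a minimal sketch of a confidence-based acquisition step in the spirit of CAMEL. This is not the authors' implementation: the `CONFIDENCE_THRESHOLD` value, the `predict`-style probability interface, and the argmax pseudo-labelling heuristic are illustrative assumptions based only on the abstract (positions where the model is uncertain are sent to the expert; the confident remainder is self-labelled).

```python
"""Sketch of confidence-based acquisition for a sequential labelling task.
Assumptions (not from the paper): threshold value, per-position softmax
probabilities as the confidence signal, argmax pseudo-labels for the
self-supervised remainder."""

import numpy as np

CONFIDENCE_THRESHOLD = 0.8  # assumed hyperparameter, purely illustrative


def acquire_labels(sequence_probs: np.ndarray) -> tuple[list[int], list[int]]:
    """Split one sequence's positions into expert-queried and self-labelled sets.

    sequence_probs: array of shape (seq_len, num_classes) holding the model's
    per-position class probabilities.
    Returns (positions_for_expert, positions_for_self_labelling).
    """
    confidences = sequence_probs.max(axis=-1)  # model confidence per position
    expert_positions = np.where(confidences < CONFIDENCE_THRESHOLD)[0]
    self_positions = np.where(confidences >= CONFIDENCE_THRESHOLD)[0]
    return expert_positions.tolist(), self_positions.tolist()


def build_training_labels(sequence_probs: np.ndarray,
                          expert_labels: dict[int, int]) -> list[int]:
    """Combine expert labels for the queried positions with argmax
    pseudo-labels for the rest of the sequence."""
    pseudo_labels = sequence_probs.argmax(axis=-1)
    return [expert_labels.get(i, int(pseudo_labels[i]))
            for i in range(len(pseudo_labels))]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=6)  # toy 6-step sequence, 3 classes
    to_expert, to_self = acquire_labels(probs)
    print("ask expert for positions:", to_expert)
    print("self-label positions:", to_self)
```

The same confidence signal could, in principle, drive the label-correction use mentioned in the abstract, by flagging existing annotations that disagree with high-confidence model predictions; how the paper actually does this is not specified here.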

Authors (9)
  1. Carel van Niekerk (23 papers)
  2. Christian Geishauser (19 papers)
  3. Michael Heck (23 papers)
  4. Shutong Feng (19 papers)
  5. Hsien-chin Lin (22 papers)
  6. Nurul Lubis (21 papers)
  7. Benjamin Ruppik (11 papers)
  8. Renato Vukovic (10 papers)
  9. Milica Gašić (57 papers)
Citations (2)
