
On the Importance of Effectively Adapting Pretrained Language Models for Active Learning (2104.08320v2)

Published 16 Apr 2021 in cs.CL

Abstract: Recent Active Learning (AL) approaches in NLP proposed using off-the-shelf pretrained language models (LMs). In this paper, we argue that these LMs are not adapted effectively to the downstream task during AL and we explore ways to address this issue. We suggest first adapting the pretrained LM to the target task by continuing training with all the available unlabeled data and then using it for AL. We also propose a simple yet effective fine-tuning method to ensure that the adapted LM is properly trained in both low and high resource scenarios during AL. Our experiments demonstrate that our approach provides substantial data efficiency improvements compared to the standard fine-tuning approach, suggesting that a poor training strategy can be catastrophic for AL.
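The sketch below illustrates the two-step pipeline the abstract describes: (1) continue pretraining the off-the-shelf LM with a masked-LM objective on all available unlabeled data, then (2) run pool-based active learning, fine-tuning the adapted LM on the growing labeled set each round. It assumes HuggingFace Transformers/Datasets; the dataset, model, entropy-based acquisition, and hyperparameters are illustrative assumptions, not the paper's exact recipe or its proposed fine-tuning method.

```python
# Minimal sketch (not the authors' exact recipe) of "adapt first, then do AL".
import torch
from torch.nn.functional import softmax
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          AutoModelForSequenceClassification,
                          DataCollatorForLanguageModeling, DataCollatorWithPadding,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"                       # assumed base LM
tok = AutoTokenizer.from_pretrained(model_name)
pool = load_dataset("imdb", split="train").shuffle(seed=0).select(range(5000))

encoded = pool.map(lambda b: tok(b["text"], truncation=True, max_length=256),
                   batched=True)

# Step 1: task adaptation -- continue masked-LM training on the unlabeled pool.
mlm_model = AutoModelForMaskedLM.from_pretrained(model_name)
Trainer(
    model=mlm_model,
    args=TrainingArguments("tapt", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=encoded.remove_columns(["text", "label"]),
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
).train()
mlm_model.save_pretrained("tapt")       # adapted LM re-used in every AL round

# Step 2: pool-based AL -- fine-tune the adapted LM, acquire uncertain examples.
labeled_idx, rounds, acq_size = list(range(100)), 5, 100
for r in range(rounds):
    clf = AutoModelForSequenceClassification.from_pretrained("tapt", num_labels=2)
    Trainer(
        model=clf,
        args=TrainingArguments(f"al-round-{r}", num_train_epochs=3,
                               per_device_train_batch_size=16),
        train_dataset=encoded.select(labeled_idx).remove_columns(["text"]),
        data_collator=DataCollatorWithPadding(tok),
    ).train()

    # Acquisition: rank unlabeled examples by predictive entropy (an assumption;
    # any AL strategy could be plugged in here).
    unlabeled_idx = [i for i in range(len(encoded)) if i not in set(labeled_idx)]
    clf.eval()
    scores = []
    for i in unlabeled_idx:
        inputs = tok(pool[i]["text"], truncation=True, max_length=256,
                     return_tensors="pt")
        with torch.no_grad():
            probs = softmax(clf(**inputs).logits, dim=-1)
        entropy = -(probs * probs.log()).sum().item()
        scores.append((entropy, i))
    labeled_idx += [i for _, i in sorted(scores, reverse=True)[:acq_size]]
```

The key design point the paper argues for is that the same adapted checkpoint ("tapt" above) initializes the classifier in every AL round, instead of starting each round from the generic off-the-shelf LM.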

Authors (3)
  1. Katerina Margatina (14 papers)
  2. Loïc Barrault (34 papers)
  3. Nikolaos Aletras (72 papers)
Citations (35)