Towards Computationally Feasible Deep Active Learning (2205.03598v1)

Published 7 May 2022 in cs.CL and cs.LG

Abstract: Active learning (AL) is a prominent technique for reducing the annotation effort required to train machine learning models. Deep learning offers a solution to several essential obstacles to deploying AL in practice, but introduces many others. One such problem is the excessive computational resources required to train an acquisition model and estimate its uncertainty on instances in the unlabeled pool. We propose two techniques that tackle this issue for text classification and tagging tasks, offering a substantial reduction in AL iteration duration and in the computational overhead introduced by deep acquisition models. We also demonstrate that our algorithm, which leverages pseudo-labeling and distilled models, overcomes an essential obstacle previously revealed in the literature: due to differences between the acquisition model used to select instances during AL and the successor model trained on the labeled data, the benefits of AL can diminish. We show that our algorithm, despite using a smaller and faster acquisition model, is capable of training a more expressive successor model with higher performance.
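
The abstract describes an AL loop in which a small, fast acquisition model both selects instances for annotation and pseudo-labels confident pool instances, so that a larger, more expressive successor model can be trained on the enlarged set. Below is a minimal sketch of one such iteration, assuming scikit-learn stand-ins (LogisticRegression as the cheap acquisition model, RandomForestClassifier as the successor) and a hypothetical `oracle` callable for the annotator; it is illustrative only, not the authors' implementation, which targets transformer-based text classification and tagging.

```python
# Illustrative sketch of one AL iteration with a cheap acquisition model,
# least-confidence querying, and pseudo-labeling, loosely following the
# abstract. Model choices, the `oracle` callable, and all thresholds are
# assumptions for demonstration, not the paper's actual setup.
import numpy as np
from sklearn.linear_model import LogisticRegression   # small, fast acquisition model
from sklearn.ensemble import RandomForestClassifier   # more expressive successor model

def al_iteration(X_lab, y_lab, X_pool, oracle, k=16, conf=0.95):
    # 1. Train the cheap acquisition model on the current labeled set.
    acq = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

    # 2. Estimate least-confidence uncertainty over the unlabeled pool.
    proba = acq.predict_proba(X_pool)
    uncertainty = 1.0 - proba.max(axis=1)

    # 3. Query the oracle (human annotator) for the k most uncertain instances.
    query = np.argsort(-uncertainty)[:k]
    X_lab = np.vstack([X_lab, X_pool[query]])
    y_lab = np.concatenate([y_lab, oracle(query)])

    # 4. Pseudo-label the remaining pool instances the acquisition model is
    #    confident about, and add them to the successor's training set.
    rest = np.setdiff1d(np.arange(len(X_pool)), query)
    confident = rest[proba[rest].max(axis=1) >= conf]
    X_train = np.vstack([X_lab, X_pool[confident]])
    y_train = np.concatenate(
        [y_lab, acq.classes_[proba[confident].argmax(axis=1)]]
    )

    # 5. Train the larger successor model on labeled + pseudo-labeled data.
    successor = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
    return X_lab, y_lab, np.delete(X_pool, query, axis=0), successor
```

Because the successor never has to run over the unlabeled pool, the per-iteration cost is dominated by the small acquisition model, which is the source of the speedup the abstract claims.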

Authors (8)
  1. Akim Tsvigun (12 papers)
  2. Artem Shelmanov (29 papers)
  3. Gleb Kuzmin (7 papers)
  4. Leonid Sanochkin (2 papers)
  5. Daniil Larionov (12 papers)
  6. Gleb Gusev (28 papers)
  7. Manvel Avetisian (7 papers)
  8. Leonid Zhukov (11 papers)
Citations (14)