Iterative Data Programming for Expanding Text Classification Corpora (2002.01412v1)

Published 4 Feb 2020 in cs.LG and cs.CL

Abstract: Real-world text classification tasks often require many labeled training examples that are expensive to obtain. Recent advances in machine teaching, specifically the data programming paradigm, facilitate the rapid creation of training data sets via a general framework for building weak models, also known as labeling functions, and denoising them through ensemble learning techniques. We present a fast, simple data programming method for augmenting text data sets by generating neighborhood-based weak models with minimal supervision. Furthermore, our method employs an iterative procedure to identify sparsely distributed examples in large volumes of unlabeled data. The iterative data programming technique improves newer weak models as more labeled data is confirmed with a human in the loop. We show empirical results on sentence classification tasks, including a task of improving intent recognition in conversational agents.
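The sketch below illustrates the general shape of the approach described in the abstract: a few seed examples per class define neighborhood-based labeling functions over unlabeled text, their votes are denoised, and confirmed labels are folded back into the seed set so the next round's labeling functions improve. This is a minimal illustration, not the paper's implementation: the TF-IDF representation, cosine-similarity threshold, majority-vote denoising, and the toy intent data are all assumptions made for the example.

```python
# Minimal sketch of iterative, neighborhood-based data programming.
# Assumptions (not from the paper): TF-IDF features, cosine similarity,
# an arbitrary demo threshold, and majority-vote denoising.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ABSTAIN = -1

def make_neighborhood_lf(seed_vecs, seed_label, threshold=0.2):
    """Labeling function: assign `seed_label` to any example whose cosine
    similarity to some seed example exceeds `threshold`; otherwise abstain."""
    def lf(example_vecs):
        sims = cosine_similarity(example_vecs, seed_vecs).max(axis=1)
        return np.where(sims >= threshold, seed_label, ABSTAIN)
    return lf

def majority_vote(votes):
    """Denoise weak labels: majority vote over the non-abstaining LFs."""
    labels = []
    for row in votes.T:  # one row per example, one entry per LF
        valid = row[row != ABSTAIN]
        labels.append(np.bincount(valid).argmax() if len(valid) else ABSTAIN)
    return np.array(labels)

# Toy seed data (hypothetical intents: 0 = book_flight, 1 = weather).
labeled = [("book a flight to boston", 0), ("reserve a plane ticket", 0),
           ("what's the weather today", 1), ("is it raining outside", 1)]
unlabeled = ["i need a flight tomorrow", "forecast for this weekend",
             "cheap tickets to denver", "will it snow tonight"]

vec = TfidfVectorizer().fit([t for t, _ in labeled] + unlabeled)
X_unlab = vec.transform(unlabeled)

for round_id in range(2):  # iterative expansion of the training corpus
    # Build one neighborhood LF per class from the current seed set.
    lfs = []
    for cls in {y for _, y in labeled}:
        seeds = vec.transform([t for t, y in labeled if y == cls])
        lfs.append(make_neighborhood_lf(seeds, cls))

    votes = np.stack([lf(X_unlab) for lf in lfs])  # (num_lfs, num_examples)
    weak_labels = majority_vote(votes)

    # Human-in-the-loop step (simulated): confirmed proposals join the seed
    # set, so the next round's labeling functions cover more of the space.
    for text, label in zip(unlabeled, weak_labels):
        if label != ABSTAIN and (text, int(label)) not in labeled:
            labeled.append((text, int(label)))
    print(f"round {round_id}: {len(labeled)} labeled examples")
```

In practice the human confirmation step would review the proposed labels rather than accept them automatically, and the denoising step could be any label-model or ensemble technique; the loop structure, not the particular choices above, is the point of the sketch.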

Authors (5)
  1. Neil Mallinar (12 papers)
  2. Abhishek Shah (12 papers)
  3. Tin Kam Ho (3 papers)
  4. Rajendra Ugrani (2 papers)
  5. Ayush Gupta (36 papers)
Citations (9)
