Creating Training Sets via Weak Indirect Supervision (2110.03484v3)

Published 7 Oct 2021 in cs.LG, stat.AP, and stat.ML

Abstract: Creating labeled training sets has become one of the major roadblocks in machine learning. To address this, recent Weak Supervision (WS) frameworks synthesize training labels from multiple potentially noisy supervision sources. However, existing frameworks are restricted to supervision sources that share the same output space as the target task. To extend the scope of usable sources, we formulate Weak Indirect Supervision (WIS), a new research problem for automatically synthesizing training labels based on indirect supervision sources that have different output label spaces. To overcome the challenge of mismatched output spaces, we develop a probabilistic modeling approach, PLRM, which uses user-provided label relations to model and leverage indirect supervision sources. Moreover, we provide a theoretically-principled test of the distinguishability of PLRM for unseen labels, along with a generalization bound. On both image and text classification tasks as well as an industrial advertising application, we demonstrate the advantages of PLRM by outperforming baselines by a margin of 2%-9%.
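To make the WIS setup concrete, the sketch below illustrates the core difficulty: indirect supervision sources vote in a label space different from the target task, and user-provided label relations are what connect the two. All names (label sets, the relation mapping, and the voting helper) are hypothetical, and the simple relation-based vote aggregation shown here is only an illustration of the problem setup, not the probabilistic PLRM model proposed in the paper.

```python
import numpy as np

# Hypothetical target label space for the task we want to train on.
TARGET_LABELS = ["cat", "dog", "bird"]

# Indirect sources vote in a DIFFERENT (here, coarser) label space,
# so their outputs cannot be used as training labels directly.
SOURCE_LABELS = ["mammal", "non-mammal"]

# User-provided label relations: which target labels each
# source-space label is consistent with.
LABEL_RELATIONS = {
    "mammal": {"cat", "dog"},
    "non-mammal": {"bird"},
}

def indirect_votes_to_target_scores(votes):
    """Aggregate indirect-source votes into soft scores over target labels.

    `votes` is a list of source-space labels, one per weak source;
    None means the source abstained. Each vote spreads uniform mass
    over the target labels it is related to. This is a toy baseline
    for illustration only, not PLRM.
    """
    scores = np.zeros(len(TARGET_LABELS))
    for vote in votes:
        if vote is None:
            continue
        related = LABEL_RELATIONS[vote]
        for i, label in enumerate(TARGET_LABELS):
            if label in related:
                scores[i] += 1.0 / len(related)
    total = scores.sum()
    return scores / total if total > 0 else scores

# Example: two sources say "mammal", one says "non-mammal".
soft_label = indirect_votes_to_target_scores(["mammal", "mammal", "non-mammal"])
print(dict(zip(TARGET_LABELS, soft_label)))
```

In this toy aggregation the "mammal" votes cannot distinguish "cat" from "dog"; PLRM instead models the indirect sources probabilistically under the label relations, and the paper gives a principled test for when such unseen target labels remain distinguishable.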

Authors (7)
  1. Jieyu Zhang (63 papers)
  2. Bohan Wang (42 papers)
  3. Xiangchen Song (22 papers)
  4. Yujing Wang (53 papers)
  5. Yaming Yang (39 papers)
  6. Jing Bai (46 papers)
  7. Alexander Ratner (24 papers)
Citations (16)
