Doubly Robust Self-Training (2306.00265v3)

Published 1 Jun 2023 in cs.LG, cs.AI, cs.CV, eess.IV, and stat.ML

Abstract: Self-training is an important technique for solving semi-supervised learning problems. It leverages unlabeled data by generating pseudo-labels and combining them with a limited labeled dataset for training. The effectiveness of self-training heavily relies on the accuracy of these pseudo-labels. In this paper, we introduce doubly robust self-training, a novel semi-supervised algorithm that provably balances between two extremes. When the pseudo-labels are entirely incorrect, our method reduces to a training process solely using labeled data. Conversely, when the pseudo-labels are completely accurate, our method transforms into a training process utilizing all pseudo-labeled data and labeled data, thus increasing the effective sample size. Through empirical evaluations on both the ImageNet dataset for image classification and the nuScenes autonomous driving dataset for 3D object detection, we demonstrate the superiority of the doubly robust loss over the standard self-training baseline.
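The two limiting cases in the abstract suggest a loss of the standard doubly robust form from the missing-data literature (the paper's exact definition, which can also account for non-uniform labeling propensities, may differ in detail): with $n$ labeled samples out of $N$ total, pseudo-labels $\hat{y}_i$ for every sample, and true labels $y_i$ on the labeled subset,

$\hat{L}_{\mathrm{DR}}(\theta) = \frac{1}{N}\sum_{i=1}^{N} \ell(\theta; x_i, \hat{y}_i) \;-\; \frac{1}{n}\sum_{i=1}^{n} \ell(\theta; x_i, \hat{y}_i) \;+\; \frac{1}{n}\sum_{i=1}^{n} \ell(\theta; x_i, y_i).$

Below is a minimal PyTorch sketch of this construction; the tensor names and the toy usage are illustrative, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def doubly_robust_loss(logits_all, pseudo_all, logits_lab, pseudo_lab, y_lab):
    """Sketch of a doubly robust self-training loss.

    logits_all / pseudo_all : model outputs and pseudo-labels for all N samples
    logits_lab / pseudo_lab : model outputs and pseudo-labels for the n labeled samples
    y_lab                   : ground-truth labels for the n labeled samples
    """
    # Train on every pseudo-labeled sample (large effective sample size)...
    term_all = F.cross_entropy(logits_all, pseudo_all)
    # ...then correct on the labeled subset: remove its pseudo-label loss
    # and substitute the true-label loss.
    term_lab_pseudo = F.cross_entropy(logits_lab, pseudo_lab)
    term_lab_true = F.cross_entropy(logits_lab, y_lab)
    return term_all - term_lab_pseudo + term_lab_true

# Illustrative usage with random data: 10 classes, 8 labeled of 32 samples,
# where the first n samples are the labeled ones.
N, n, C = 32, 8, 10
logits_all = torch.randn(N, C, requires_grad=True)
pseudo_all = torch.randint(0, C, (N,))
y_lab = torch.randint(0, C, (n,))
loss = doubly_robust_loss(logits_all, pseudo_all,
                          logits_all[:n], pseudo_all[:n], y_lab)
loss.backward()
```

The cancellation structure delivers the two extremes described above: if the pseudo-labels are exactly correct, the last two terms cancel and the loss reduces to training on all $N$ samples; if they carry no signal, the first two terms cancel in expectation and only the labeled-data term remains.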

Authors (7)
  1. Banghua Zhu (38 papers)
  2. Mingyu Ding (82 papers)
  3. Philip Jacobson (4 papers)
  4. Ming Wu (43 papers)
  5. Wei Zhan (130 papers)
  6. Michael Jordan (28 papers)
  7. Jiantao Jiao (83 papers)
Citations (4)