Neighborhood-Regularized Self-Training for Learning with Few Labels (2301.03726v2)

Published 10 Jan 2023 in cs.LG and cs.CL

Abstract: Training deep neural networks (DNNs) with limited supervision has been a popular research topic as it can significantly alleviate the annotation burden. Self-training has been successfully applied in semi-supervised learning tasks, but one drawback of self-training is that it is vulnerable to the label noise from incorrect pseudo labels. Inspired by the fact that samples with similar labels tend to share similar representations, we develop a neighborhood-based sample selection approach to tackle the issue of noisy pseudo labels. We further stabilize self-training via aggregating the predictions from different rounds during sample selection. Experiments on eight tasks show that our proposed method outperforms the strongest self-training baseline with 1.83% and 2.51% performance gain for text and graph datasets on average. Our further analysis demonstrates that our proposed data selection strategy reduces the noise of pseudo labels by 36.8% and saves 57.3% of the time when compared with the best baseline. Our code and appendices will be uploaded to https://github.com/ritaranx/NeST.
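The core idea of the abstract, that a pseudo label is more trustworthy when a sample's nearest neighbors in representation space agree with it, can be sketched as follows. This is an illustrative simplification, not the paper's exact NeST procedure; the cosine-similarity neighborhood, `k`, and `threshold` are assumptions chosen for the example.

```python
import numpy as np

def neighborhood_consistency(embeddings, pseudo_labels, k=3):
    """Score each sample by the fraction of its k nearest neighbors
    (by cosine similarity) that share its pseudo label."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # exclude each sample from its own neighborhood
    scores = np.empty(len(embeddings))
    for i, row in enumerate(sims):
        nn = np.argsort(row)[-k:]  # indices of the k most similar samples
        scores[i] = np.mean(pseudo_labels[nn] == pseudo_labels[i])
    return scores

def select_confident(embeddings, pseudo_labels, k=3, threshold=0.5):
    """Keep only samples whose neighborhood agrees with their pseudo label."""
    scores = neighborhood_consistency(embeddings, pseudo_labels, k)
    return np.where(scores >= threshold)[0]
```

With two well-separated clusters, a point embedded in one cluster but pseudo-labeled as the other gets a low consistency score and is filtered out, which is the noise-reduction effect the abstract reports. The paper's further stabilization step, aggregating predictions from different self-training rounds, would correspond to averaging such scores (or the underlying model predictions) over rounds before selection.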

Authors (8)
  1. Ran Xu (89 papers)
  2. Yue Yu (343 papers)
  3. Hejie Cui (33 papers)
  4. Xuan Kan (18 papers)
  5. Yanqiao Zhu (45 papers)
  6. Joyce Ho (8 papers)
  7. Chao Zhang (907 papers)
  8. Carl Yang (130 papers)
Citations (21)
