
Dash: Semi-Supervised Learning with Dynamic Thresholding (2109.00650v1)

Published 1 Sep 2021 in cs.LG, cs.CV, and stat.ML

Abstract: While semi-supervised learning (SSL) has received tremendous attention in many machine learning tasks due to its successful use of unlabeled data, existing SSL algorithms use either all unlabeled examples or only the unlabeled examples with a fixed high-confidence prediction during the training process. However, it is possible that too many correct/wrong pseudo-labeled examples are eliminated/selected. In this work we develop a simple yet powerful framework whose key idea is to select a subset of training examples from the unlabeled data when performing existing SSL methods, so that only the unlabeled examples with pseudo labels related to the labeled data will be used to train models. The selection is performed at each updating iteration by keeping only the examples whose losses are smaller than a given threshold that is dynamically adjusted through the iterations. Our proposed approach, Dash, enjoys adaptivity in terms of unlabeled data selection and comes with a theoretical guarantee. Specifically, we theoretically establish the convergence rate of Dash from the view of non-convex optimization. Finally, we empirically demonstrate the effectiveness of the proposed method in comparison with state-of-the-art methods over benchmarks.

Citations (197)

Summary

Semi-Supervised Learning with Dynamic Thresholding: Overview of Dash

The paper introduces Dash, a semi-supervised learning framework that optimizes the selection of unlabeled examples via dynamic thresholding. It targets a core challenge of semi-supervised learning (SSL) when integrating labeled and unlabeled data: reliably selecting pseudo-labeled samples that come from the same distribution as the labeled data.

Key Concepts and Methodology

Dash selects training examples from the unlabeled data based on their loss values relative to a threshold that decreases over the course of training. This dynamic thresholding mechanism adapts across iterations, modulating which pseudo-labeled examples are incorporated into model updates: as the threshold shrinks, training gradually concentrates on the examples with smaller losses, whose pseudo labels are more likely to be correct. This approach accounts for the variability and uncertainty inherent in unlabeled data, distinguishing it from methods such as FixMatch that rely on a fixed confidence threshold.
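The selection step can be sketched as follows. The geometric decay schedule and the hyperparameter values (`C`, `gamma`) are illustrative assumptions rather than the paper's exact settings, and `rho_hat` stands in for an estimate of the average loss on the labeled data:

```python
import numpy as np

def dash_select(unlabeled_losses, rho_hat, step, C=1.0001, gamma=1.27):
    """Select unlabeled examples whose pseudo-label loss falls below a
    dynamically decreasing threshold.

    The threshold shrinks geometrically with the training step, so early
    steps keep most pseudo-labeled examples while later steps keep only
    the low-loss (likely correctly pseudo-labeled) ones.
    """
    # Threshold decays geometrically from C * rho_hat as training proceeds.
    rho_t = C * gamma ** (-(step - 1)) * rho_hat
    # Boolean mask over the unlabeled batch: True = keep this example.
    mask = unlabeled_losses < rho_t
    return mask, rho_t

# Example: the same batch of per-example losses, early vs. late in training.
losses = np.array([0.1, 0.5, 2.0, 0.05])
mask_early, _ = dash_select(losses, rho_hat=1.0, step=1)   # keeps 3 of 4
mask_late, _ = dash_select(losses, rho_hat=1.0, step=5)    # keeps 2 of 4
```

In an actual SSL training loop the mask would gate which unlabeled examples contribute to the pseudo-label loss term at each update.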

Theoretical Underpinnings

The theoretical framework supporting Dash is grounded in non-convex optimization. The paper not only introduces the Dash algorithm but also provides a guarantee on its convergence rate. Convergence is analyzed under the Polyak-Łojasiewicz (PL) condition in a non-convex setting, a condition increasingly recognized for its applicability to deep learning optimization. An inductive proof shows that, with high probability, each training iteration maintains a diminishing loss, ensuring model efficacy over successive iterations.
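For reference, a differentiable function f with global minimum value f* satisfies the PL condition with parameter μ > 0 when:

```latex
\|\nabla f(\mathbf{x})\|^2 \;\ge\; 2\mu \left( f(\mathbf{x}) - f^* \right) \quad \text{for all } \mathbf{x}
```

This guarantees that the gradient cannot vanish away from the global minimum, which enables linear-rate-style convergence arguments even when f is not convex.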

Experimental Findings

The empirical evaluation demonstrates Dash's effectiveness against state-of-the-art SSL algorithms such as MixMatch, UDA, ReMixMatch, and FixMatch on benchmark datasets including CIFAR-10, CIFAR-100, SVHN, and STL-10. Dash consistently achieves superior or comparable performance, with its advantage most pronounced when labeled data is scarce. Notably, Dash improved over FixMatch with relative error-rate reductions of approximately 10% to 58% across settings, underscoring the impact of adaptive thresholding in semi-supervised training.

Implications and Future Directions

The implications of Dash extend beyond its immediate performance numbers: it is a general framework that can be integrated with existing SSL methods. By enabling finer control over unlabeled data selection, Dash improves model robustness and accuracy, potentially paving the way for less label-dependent AI systems, a crucial factor in domains where labeled data is sparse or costly to acquire.

Furthermore, the paper suggests several avenues for future work, including applying the method in domains such as natural language processing and object detection, where the dynamic thresholding mechanism could be further customized. More broadly, similar dynamically adjusted selection criteria could be applied across diverse learning architectures to further reduce error rates.

Given the growing role of SSL in AI applications, Dash contributes substantively to the question of how machine learning models can leverage unlabeled data strategically.
