
SPLBoost: An Improved Robust Boosting Algorithm Based on Self-paced Learning (1706.06341v2)

Published 20 Jun 2017 in cs.CV, cs.LG, and stat.ML

Abstract: It is known that Boosting can be interpreted as a gradient descent technique that minimizes an underlying loss function. Specifically, the loss minimized by traditional AdaBoost is the exponential loss, which has been shown to be very sensitive to random noise and outliers. Therefore, several Boosting algorithms, e.g., LogitBoost and SavageBoost, have been proposed to improve the robustness of AdaBoost by replacing the exponential loss with specially designed robust loss functions. In this work, we present a new way to robustify AdaBoost, i.e., incorporating the robust learning idea of Self-paced Learning (SPL) into the Boosting framework. Specifically, we design a new robust Boosting algorithm based on the SPL regime, i.e., SPLBoost, which can be easily implemented by slightly modifying off-the-shelf Boosting packages. Extensive experiments and a theoretical characterization are also carried out to illustrate the merits of the proposed SPLBoost.
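
To make the abstract's idea concrete, the following is a minimal, hedged sketch (not the authors' reference implementation) of how a self-paced sample-selection step can be grafted onto a standard AdaBoost loop: after each round, samples whose current exponential loss exceeds a self-paced threshold are temporarily dropped before fitting the next weak learner. The function name `splboost_sketch`, the hard-threshold weighting, and the threshold schedule `lam *= growth` are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of SPL-style sample selection inside an AdaBoost-like loop.
# Assumptions: hard SPL weights (0/1), exponential loss, decision stumps as weak learners.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def splboost_sketch(X, y, n_rounds=50, lam=2.0, growth=1.1):
    """y must take values in {-1, +1}. Returns (weak_learners, alphas)."""
    n = len(y)
    F = np.zeros(n)                       # current ensemble margin scores
    learners, alphas = [], []
    for _ in range(n_rounds):
        loss = np.exp(-y * F)             # per-sample exponential loss
        v = (loss <= lam).astype(float)   # hard SPL weights: keep only "easy" samples
        w = loss * v
        if w.sum() == 0:
            break
        w /= w.sum()
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(w @ (pred != y), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        F += alpha * pred
        learners.append(stump)
        alphas.append(alpha)
        lam *= growth                     # gradually admit harder samples (the "pace")
    return learners, alphas

def predict(learners, alphas, X):
    return np.sign(sum(a * h.predict(X) for h, a in zip(learners, alphas)))
```

The only change relative to a vanilla AdaBoost round is the masking of high-loss samples before reweighting, which mirrors the abstract's claim that the method can be obtained by slightly modifying off-the-shelf Boosting packages.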

Authors (5)
  1. Kaidong Wang (9 papers)
  2. Yao Wang (331 papers)
  3. Qian Zhao (125 papers)
  4. Deyu Meng (182 papers)
  5. Zongben Xu (94 papers)
Citations (25)
