An Online Boosting Algorithm with Theoretical Justifications (1206.6422v1)

Published 27 Jun 2012 in cs.LG and stat.ML

Abstract: We study the task of online boosting--combining online weak learners into an online strong learner. While batch boosting has a sound theoretical foundation, online boosting deserves more study from the theoretical perspective. In this paper, we carefully compare the differences between online and batch boosting, and propose a novel and reasonable assumption for the online weak learner. Based on the assumption, we design an online boosting algorithm with a strong theoretical guarantee by adapting from the offline SmoothBoost algorithm that matches the assumption closely. We further tackle the task of deciding the number of weak learners using established theoretical results for online convex programming and predicting with expert advice. Experiments on real-world data sets demonstrate that the proposed algorithm compares favorably with existing online boosting algorithms.
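The abstract describes the core idea of online boosting: several weak learners are updated on each incoming example, with later learners seeing the example at a weight that reflects the mistakes of earlier ones, and predictions made by a weighted vote. The sketch below illustrates that idea only in spirit; the perceptron weak learner, the reweighting constants, and the uniform voting weights are all illustrative choices, not the paper's SmoothBoost-based algorithm or its theoretical guarantees.

```python
import random

class OnlinePerceptron:
    """Toy online weak learner: a perceptron on 2-D inputs."""
    def __init__(self, lr=0.1):
        self.w = [0.0, 0.0]
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        s = self.w[0] * x[0] + self.w[1] * x[1] + self.b
        return 1 if s >= 0 else -1

    def update(self, x, y, weight=1.0):
        # Weighted perceptron update on a mistake.
        if self.predict(x) != y:
            self.w[0] += self.lr * weight * y * x[0]
            self.w[1] += self.lr * weight * y * x[1]
            self.b += self.lr * weight * y

class OnlineBooster:
    """Illustrative online-boosting sketch (not the paper's algorithm):
    each weak learner sees the example at a weight that grows when
    earlier learners misclassify it and shrinks when they are correct."""
    def __init__(self, n_learners=3):
        self.learners = [OnlinePerceptron() for _ in range(n_learners)]
        self.alphas = [1.0] * n_learners  # uniform voting weights (assumption)

    def predict(self, x):
        vote = sum(a * h.predict(x) for a, h in zip(self.alphas, self.learners))
        return 1 if vote >= 0 else -1

    def update(self, x, y):
        weight = 1.0
        for h in self.learners:
            h.update(x, y, weight)
            # Emphasize the example for later learners if this one errs.
            if h.predict(x) != y:
                weight = min(2.0, weight * 1.5)
            else:
                weight = max(0.5, weight / 1.5)

random.seed(0)
booster = OnlineBooster(n_learners=3)

# Toy stream: label = sign(x0 + x1), processed one example at a time.
for _ in range(2000):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    y = 1 if x[0] + x[1] >= 0 else -1
    booster.update(x, y)

# Evaluate the combined predictor on fresh points.
correct = 0
for _ in range(500):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    y = 1 if x[0] + x[1] >= 0 else -1
    correct += (booster.predict(x) == y)
accuracy = correct / 500
```

The paper's actual contribution lies in the weak-learner assumption and in adapting SmoothBoost's smooth reweighting so the guarantee holds online; it also selects the number of weak learners via online convex programming and expert advice, which this sketch fixes by hand.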

Authors (3)
  1. Shang-Tse Chen (28 papers)
  2. Hsuan-Tien Lin (43 papers)
  3. Chi-Jen Lu (14 papers)
Citations (80)
