AdaBoost and Forward Stagewise Regression are First-Order Convex Optimization Methods (1307.1192v1)

Published 4 Jul 2013 in stat.ML, cs.LG, and math.OC

Abstract: Boosting methods are highly popular and effective supervised learning methods which combine weak learners into a single accurate model with good statistical performance. In this paper, we analyze two well-known boosting methods, AdaBoost and Incremental Forward Stagewise Regression (FS$\varepsilon$), by establishing their precise connections to the Mirror Descent algorithm, which is a first-order method in convex optimization. As a consequence of these connections we obtain novel computational guarantees for these boosting methods. In particular, we characterize convergence bounds of AdaBoost, related to both the margin and log-exponential loss function, for any step-size sequence. Furthermore, this paper presents, for the first time, precise computational complexity results for FS$\varepsilon$.
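To make the two procedures named in the abstract concrete, below is a minimal, illustrative sketch (not the authors' code) of AdaBoost with a user-supplied step-size sequence and of Incremental Forward Stagewise Regression (FS$\varepsilon$). The function names, signatures, and the abstract representation of weak learners are assumptions made for illustration only; the paper's contribution is the mirror-descent analysis of these methods, not this implementation.

```python
# Hedged sketch of the two boosting procedures discussed in the paper.
# All names and signatures here are illustrative assumptions.
import numpy as np

def fs_epsilon(X, y, eps=0.01, n_iter=1000):
    """Incremental Forward Stagewise Regression (FS_epsilon) sketch.

    At each step, pick the column most correlated with the current
    residual and nudge its coefficient by a small fixed amount eps.
    """
    n, p = X.shape
    beta = np.zeros(p)
    r = y.astype(float).copy()        # current residual
    for _ in range(n_iter):
        corr = X.T @ r                # correlation of each column with the residual
        j = int(np.argmax(np.abs(corr)))
        delta = eps * np.sign(corr[j])
        beta[j] += delta              # small stagewise coefficient update
        r -= delta * X[:, j]          # update the residual
    return beta

def adaboost(weak_learners, X, y, step_sizes):
    """AdaBoost sketch with an arbitrary step-size sequence.

    y takes values in {-1, +1}; weak_learners is a list of callables
    h(X) -> array of {-1, +1}. Classical AdaBoost sets
    alpha_t = 0.5 * log((1 - err_t) / err_t); the paper's guarantees
    cover general step-size sequences, which are passed in here.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)           # distribution over training examples
    coef = np.zeros(len(weak_learners))
    for alpha in step_sizes:
        # pick the weak learner with the largest (absolute) weighted edge
        edges = np.array([np.sum(w * y * h(X)) for h in weak_learners])
        j = int(np.argmax(np.abs(edges)))
        s = np.sign(edges[j])
        margin = y * weak_learners[j](X)
        coef[j] += alpha * s
        w *= np.exp(-alpha * s * margin)  # exponential reweighting of examples
        w /= w.sum()
    return coef
```

Both loops have the coordinate-wise, greedy structure that the paper reinterprets as Mirror Descent steps on a suitable convex objective (the log-exponential loss for AdaBoost, a correlation-based objective for FS$\varepsilon$), which is what yields the stated computational guarantees.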

Authors (3)
  1. Robert M. Freund (18 papers)
  2. Paul Grigas (23 papers)
  3. Rahul Mazumder (80 papers)
Citations (12)
