Linearized Additive Classifiers (1110.0879v1)

Published 5 Oct 2011 in cs.CV, cs.AI, and cs.LG

Abstract: We revisit the additive model learning literature and adapt a penalized spline formulation due to Eilers and Marx to train additive classifiers efficiently. We also propose two new embeddings based on two classes of orthogonal bases with orthogonal derivatives, which can also be used to efficiently learn additive classifiers. This paper follows the popular theme in the current literature where kernel SVMs are learned much more efficiently using an approximate embedding and a linear machine. In this paper we show that spline bases are especially well suited for learning additive models because of their sparsity structure and the ease of computing the embedding, which enables one to train these models in an online manner without incurring the memory overhead of precomputing and storing the embeddings. We show interesting connections between the B-spline basis and the histogram intersection kernel, and show that for a particular choice of regularization and degree of the B-splines, our proposed learning algorithm closely approximates the histogram intersection kernel SVM. This enables one to learn additive models with almost no memory overhead compared to a fast linear solver, such as LIBLINEAR, while being only 5-6x slower on average. On two large-scale image classification datasets, MNIST and Daimler Chrysler pedestrians, the proposed additive classifiers are as accurate as the kernel SVM, while being two orders of magnitude faster to train.
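
As a rough illustration of the embedding-plus-linear-solver recipe the abstract describes, the sketch below encodes each feature with degree-1 (hat-function) B-splines and trains a linear SVM on the concatenated encoding. The knot count, the [0, 1] feature scaling, and the scikit-learn LinearSVC (LIBLINEAR-backed) solver are assumptions for illustration, not the authors' exact formulation, which uses a penalized-spline regularizer and supports online training.

```python
import numpy as np
from sklearn.svm import LinearSVC  # LIBLINEAR-backed linear solver


def bspline_degree1_embedding(X, n_knots=20):
    """Sparse degree-1 (hat-function) B-spline encoding of each feature.

    Each feature value in [0, 1] activates at most two adjacent knots with
    linear-interpolation weights, so the embedding is sparse per dimension;
    a linear classifier on the concatenated encoding acts as an additive model.
    Knot count and uniform knot placement are illustrative choices.
    """
    n_samples, n_features = X.shape
    knots = np.linspace(0.0, 1.0, n_knots)
    step = knots[1] - knots[0]
    Z = np.zeros((n_samples, n_features * n_knots))
    for j in range(n_features):
        x = np.clip(X[:, j], 0.0, 1.0)
        left = np.minimum((x / step).astype(int), n_knots - 2)
        w = (x - knots[left]) / step          # weight on the right knot
        cols = j * n_knots + left
        Z[np.arange(n_samples), cols] = 1.0 - w
        Z[np.arange(n_samples), cols + 1] = w
    return Z


# Hypothetical usage, assuming features already scaled to [0, 1]:
# Z_train = bspline_degree1_embedding(X_train)
# clf = LinearSVC(C=1.0).fit(Z_train, y_train)
# preds = clf.predict(bspline_degree1_embedding(X_test))
```

Because each sample touches only two knots per feature, the encoding could also be built as a scipy sparse matrix to keep memory usage close to that of a plain linear solver, in the spirit of the online training the paper emphasizes.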

Citations (2)
