
Complexity-Optimized Sparse Bayesian Learning for Scalable Classification Tasks (2107.08195v5)

Published 17 Jul 2021 in cs.LG and stat.ML

Abstract: Sparse Bayesian Learning (SBL) constructs an extremely sparse probabilistic model with very competitive generalization. However, SBL needs to invert a large covariance matrix with complexity $O(M^3)$ (M: feature size) to update the regularization priors, which makes it difficult to apply to problems with a high-dimensional feature space or a large data size and can easily lead to memory overflow. This paper addresses this issue with a newly proposed diagonal Quasi-Newton (DQN) method for SBL, called DQN-SBL, in which the inversion of the large covariance matrix is avoided so that the complexity is reduced to $O(M)$. DQN-SBL is thoroughly evaluated on nonlinear and linear classification with various benchmarks of different sizes. Experimental results verify that DQN-SBL achieves competitive generalization with a very sparse model and scales well to large-scale problems.
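To make the complexity argument concrete, the sketch below illustrates the general idea of replacing the full posterior covariance with a diagonal surrogate in an SBL-style binary classifier. It is not the paper's exact algorithm: the function name `dqn_sbl_classify`, the logistic likelihood, the learning-rate schedule, and the specific diagonal Hessian surrogate are all assumptions made for illustration. The point is that every per-iteration quantity involving the features is kept as a length-M vector, so no $M \times M$ matrix is ever formed or inverted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dqn_sbl_classify(Phi, t, n_iter=200, lr=0.1, alpha_max=1e6, eps=1e-8):
    """Illustrative sketch of a DQN-SBL-style binary classifier (labels t in {0, 1}).

    Only a diagonal surrogate of the posterior covariance is maintained, so the
    per-iteration memory stays O(M) instead of the O(M^2) storage and O(M^3)
    work required to invert the full covariance matrix. This is a hypothetical
    implementation, not the authors' code.
    """
    N, M = Phi.shape
    w = np.zeros(M)        # posterior mean estimate of the weights
    alpha = np.ones(M)     # ARD precision (regularization) priors
    diag_h = np.ones(M)    # diagonal Hessian surrogate (quasi-Newton)

    for _ in range(n_iter):
        y = sigmoid(Phi @ w)
        grad = Phi.T @ (y - t) + alpha * w           # gradient of the negative log posterior
        b = y * (1.0 - y)                            # logistic curvature terms
        diag_h = np.einsum('nm,n,nm->m', Phi, b, Phi) + alpha  # diag of Phi^T B Phi + A
        w -= lr * grad / (diag_h + eps)              # diagonally preconditioned (quasi-Newton) step

        # SBL-style hyperparameter update using only the diagonal covariance surrogate
        sigma_diag = 1.0 / (diag_h + eps)            # stands in for diag(Sigma)
        gamma = 1.0 - alpha * sigma_diag
        alpha = np.clip(gamma / (w ** 2 + eps), 0.0, alpha_max)

        # prune weights whose priors have diverged, yielding the sparse model
        w[alpha >= alpha_max] = 0.0

    return w, alpha
```

Under these assumptions, each iteration costs O(NM) for the matrix-vector products and O(M) for the prior updates, which is what allows the method to scale to high-dimensional problems where forming the full covariance would exhaust memory.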
