Towards More Efficient SPSD Matrix Approximation and CUR Matrix Decomposition (1503.08395v6)

Published 29 Mar 2015 in cs.LG

Abstract: Symmetric positive semi-definite (SPSD) matrix approximation methods have been extensively used to speed up large-scale eigenvalue computation and kernel learning methods. The standard sketch-based method, which we call the prototype model, produces relatively accurate approximations, but is inefficient on large square matrices. The Nyström method is highly efficient, but can only achieve low accuracy. In this paper we propose a novel model that we call the *fast SPSD matrix approximation model*. The fast model is nearly as efficient as the Nyström method and as accurate as the prototype model. We show that the fast model can potentially solve eigenvalue problems and kernel learning problems in time linear in the matrix size $n$ while achieving $1+\epsilon$ relative error, whereas both the prototype model and the Nyström method cost at least quadratic time to attain a comparable error bound. Empirical comparisons among the prototype model, the Nyström method, and our fast model demonstrate the superiority of the fast model. We also contribute new understandings of the Nyström method: it is a special instance of our fast model and an approximation to the prototype model. Our technique can be applied straightforwardly to compute the CUR matrix decomposition more efficiently without much affecting its accuracy.
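To make the contrast in the abstract concrete, below is a minimal sketch of the two baselines it compares: the Nyström method, which builds its core matrix from the $c \times c$ intersection block of sampled columns, and the prototype (sketch-based) model, which solves a least-squares fit against the full matrix. This sketch assumes uniform column sampling for simplicity (the paper considers more general sketching matrices), the function names are illustrative, and the paper's fast model itself is not reproduced here.

```python
import numpy as np

def nystrom_approx(A, c, seed=None):
    """Nystrom approximation of an SPSD matrix A (n x n).

    Samples c columns uniformly without replacement and returns
    factors C (n x c) and U (c x c) with A approximated by C @ U @ C.T.
    Cost after sampling: O(c^3) for the pseudoinverse of the c x c block.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    idx = rng.choice(n, size=c, replace=False)
    C = A[:, idx]            # sampled columns, n x c
    W = C[idx, :]            # intersection block, c x c
    U = np.linalg.pinv(W)    # Nystrom core: U = W^+
    return C, U

def prototype_approx(A, c, seed=None):
    """Prototype (sketch-based) model: U = C^+ A (C^+)^T.

    More accurate than Nystrom, but multiplies the full n x n matrix
    by the pseudoinverse of C, which costs O(n^2 c) time.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    idx = rng.choice(n, size=c, replace=False)
    C = A[:, idx]
    Cp = np.linalg.pinv(C)   # c x n
    U = Cp @ A @ Cp.T        # least-squares core, c x c
    return C, U

# Usage: compare relative errors on a synthetic low-rank SPSD matrix.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 50))
    A = X @ X.T              # SPSD, rank <= 50
    for name, fn in [("Nystrom", nystrom_approx), ("prototype", prototype_approx)]:
        C, U = fn(A, c=100, seed=0)
        err = np.linalg.norm(A - C @ U @ C.T, "fro") / np.linalg.norm(A, "fro")
        print(f"{name}: relative Frobenius error = {err:.3e}")
```

The quadratic-versus-linear gap claimed in the abstract shows up in the `Cp @ A` product above: the prototype model touches all $n^2$ entries of $A$, while the Nyström core only needs the sampled $c \times c$ block.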

Authors (3)
  1. Shusen Wang (35 papers)
  2. Zhihua Zhang (118 papers)
  3. Tong Zhang (569 papers)
Citations (3)