Algorithm-agnostic low-rank approximation of operator monotone matrix functions (2311.14023v2)

Published 23 Nov 2023 in math.NA, cs.DS, and cs.NA

Abstract: Low-rank approximation of a matrix function, $f(A)$, is an important task in computational mathematics. Most methods require direct access to $f(A)$, which is often considerably more expensive than accessing $A$. Persson and Kressner (SIMAX 2023) avoid this issue for symmetric positive semidefinite matrices by proposing funNyström, which first constructs a Nyström approximation to $A$ using subspace iteration, and then uses the approximation to directly obtain a low-rank approximation for $f(A)$. They prove that the method yields a near-optimal approximation whenever $f$ is a continuous operator monotone function with $f(0) = 0$. We significantly generalize the results of Persson and Kressner beyond subspace iteration. We show that if $\widehat{A}$ is a near-optimal low-rank Nyström approximation to $A$ then $f(\widehat{A})$ is a near-optimal low-rank approximation to $f(A)$, independently of how $\widehat{A}$ is computed. Further, we show sufficient conditions for a basis $Q$ to produce a near-optimal Nyström approximation $\widehat{A} = AQ(Q^T A Q)^{\dagger} Q^T A$. We use these results to establish that many common low-rank approximation methods produce near-optimal Nyström approximations to $A$ and therefore to $f(A)$.
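
The central object in the abstract is the Nyström approximation $\widehat{A} = AQ(Q^T A Q)^{\dagger} Q^T A$ and the funNyström idea of applying $f$ to $\widehat{A}$ instead of to $A$. The following is a minimal illustrative sketch (not the authors' code) of how that computation might look in Python with NumPy; the function name `fun_nystrom` and the pseudoinverse cutoff `tol` are assumptions, and the basis $Q$ can come from subspace iteration or any other sketching method, reflecting the paper's algorithm-agnostic claim.

```python
import numpy as np

def fun_nystrom(A, Q, f):
    """Sketch: given symmetric PSD A (n x n), an orthonormal basis Q (n x k),
    and a scalar function f with f(0) = 0, return (U, fvals) such that
    U @ np.diag(fvals) @ U.T equals f(A_hat), where
    A_hat = A Q (Q^T A Q)^+ Q^T A is the Nystrom approximation built from Q."""
    AQ = A @ Q                                   # n x k product A Q
    core = Q.T @ AQ                              # k x k core matrix Q^T A Q
    core = (core + core.T) / 2                   # symmetrize against round-off
    w, V = np.linalg.eigh(core)                  # eigendecomposition of the core
    w = np.maximum(w, 0.0)                       # clip small negative eigenvalues
    tol = 1e-12 * max(w.max(), 1.0)              # assumed pseudoinverse cutoff
    inv_sqrt = np.zeros_like(w)
    keep = w > tol
    inv_sqrt[keep] = 1.0 / np.sqrt(w[keep])      # pseudoinverse square root of core
    B = AQ @ (V * inv_sqrt)                      # factor with A_hat = B B^T
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    return U, f(s ** 2)                          # f(A_hat) = U diag(f(s^2)) U^T

# Example usage: rank-20 approximation of the matrix square root of a PSD matrix.
rng = np.random.default_rng(0)
G = rng.standard_normal((500, 500))
A = G @ G.T                                      # symmetric PSD test matrix
# Any reasonable basis works; here, one step of subspace iteration on a Gaussian sketch.
Q, _ = np.linalg.qr(A @ rng.standard_normal((500, 20)))
U, fvals = fun_nystrom(A, Q, np.sqrt)            # rank-20 approximation of sqrtm(A)
```

Because $f(0) = 0$, directions outside the range of $\widehat{A}$ contribute nothing, so the factored form $U\,\mathrm{diag}(f(s^2))\,U^T$ is exactly $f(\widehat{A})$ and, per the paper's main result, a near-optimal low-rank approximation to $f(A)$ whenever $\widehat{A}$ is near-optimal for $A$.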

Authors (3)
  1. David Persson (9 papers)
  2. Raphael A. Meyer (8 papers)
  3. Christopher Musco (66 papers)
Citations (1)
