Stochastic variance reduced gradient method for linear ill-posed inverse problems (2403.12460v1)

Published 19 Mar 2024 in math.NA, cs.NA, and math.OC

Abstract: In this paper we apply the stochastic variance reduced gradient (SVRG) method, a popular variance reduction method in optimization for accelerating the stochastic gradient method, to solve large-scale linear ill-posed systems in Hilbert spaces. Under {\it a priori} choices of stopping indices, we derive a convergence rate result when the sought solution satisfies a benchmark source condition, and we establish a convergence result without using any source condition. To terminate the method in an {\it a posteriori} manner, we consider the discrepancy principle and show that it terminates the method in finitely many iteration steps almost surely. Various numerical results are reported to demonstrate the performance of the method.
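The following is a minimal finite-dimensional sketch of the kind of SVRG iteration the abstract describes, assuming the linear system is split into row blocks A_i x = b_i and the least-squares functional (1/2N) Σ_i ||A_i x − b_i||² is minimized. The function name, parameters, and the placement of the discrepancy check are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def svrg_linear(A_blocks, b_blocks, eta, n_outer, n_inner, delta=None, tau=1.1):
    """SVRG sketch for (1/2N) * sum_i ||A_i x - b_i||^2 (hypothetical helper).

    A_blocks, b_blocks : lists of row blocks A_i and corresponding data b_i
    eta                : step size
    delta, tau         : noise level and parameter for the discrepancy
                         principle; if delta is None, run the full budget.
    """
    N = len(A_blocks)
    n = A_blocks[0].shape[1]
    x = np.zeros(n)                          # zero initial guess

    for _ in range(n_outer):
        # snapshot point and full gradient at the snapshot
        x_snap = x.copy()
        full_grad = sum(A.T @ (A @ x_snap - b)
                        for A, b in zip(A_blocks, b_blocks)) / N

        for _ in range(n_inner):
            i = np.random.randint(N)         # sample one block uniformly
            A_i, b_i = A_blocks[i], b_blocks[i]
            # variance-reduced stochastic gradient
            g = A_i.T @ (A_i @ x - b_i) - A_i.T @ (A_i @ x_snap - b_i) + full_grad
            x = x - eta * g

            # a posteriori stopping by the discrepancy principle:
            # stop once the residual drops below tau * delta
            if delta is not None:
                residual = np.sqrt(sum(np.linalg.norm(A @ x - b) ** 2
                                       for A, b in zip(A_blocks, b_blocks)))
                if residual <= tau * delta:
                    return x
    return x
```

In practice the residual would not be recomputed over all blocks at every inner step; this is kept simple here only to make the stopping rule explicit.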

Authors (2)
  1. Qinian Jin (24 papers)
  2. Liuhong Chen (4 papers)
Citations (2)
