
Convergence Analysis of the Gaussian Regularized Shannon Sampling Formula (1601.01363v1)

Published 7 Jan 2016 in cs.IT and math.IT

Abstract: We consider the reconstruction of a bandlimited function from its finite localized sample data. Truncating the classical Shannon sampling series results in an unsatisfactory convergence rate due to the slow decay of the sinc function. To overcome this drawback, a simple and highly effective method, called the Gaussian regularization of the Shannon series, was proposed in the engineering literature and has received considerable attention. It works by multiplying the sinc function in the Shannon series with a Gaussian regularization function. L. Qian (Proc. Amer. Math. Soc., 2003) established the convergence rate of $O(\sqrt{n}\exp(-\frac{\pi-\delta}2n))$ for this method, where $\delta<\pi$ is the bandwidth and $n$ is the number of samples. C. Micchelli {\it et al.} (J. Complexity, 2009) proposed a different regularized method and obtained the corresponding convergence rate of $O(\frac1{\sqrt{n}}\exp(-\frac{\pi-\delta}2n))$. The latter rate is by far the best among all regularized methods for the Shannon series. However, their regularized method requires solving a linear system, making it implicit and more complicated. The main objective of this note is to show that the Gaussian regularization of the Shannon series can also achieve the same best convergence rate as that of C. Micchelli {\it et al}. We also show that the Gaussian regularization method can improve the convergence rate for the useful average sampling. Finally, numerical experiments confirm our results.
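The abstract describes the Gaussian regularization concretely: each sinc term of the truncated Shannon series is multiplied by a Gaussian window before summing the $2n$ samples nearest the evaluation point. A minimal sketch of this reconstruction is below; the window width $r = \sqrt{n/(\pi-\delta)}$ and the symmetric truncation window are assumptions consistent with Qian's choice, not taken verbatim from this paper, and the test signal `f` is an arbitrary bandlimited example.

```python
import numpy as np

def gauss_shannon(f, t, n, delta):
    """Gaussian-regularized Shannon reconstruction of f at point t.

    Sums the 2n sinc terms nearest t (unit sample spacing), each damped by
    a Gaussian window.  The width r = sqrt(n / (pi - delta)) follows Qian's
    parameter choice (an assumption here; see the paper for the analysis).
    """
    r = np.sqrt(n / (np.pi - delta))
    k0 = int(np.floor(t))
    ks = np.arange(k0 - n + 1, k0 + n + 1)   # the 2n integers around t
    # np.sinc(x) = sin(pi x) / (pi x), matching the Shannon kernel
    weights = np.sinc(t - ks) * np.exp(-(t - ks) ** 2 / (2 * r ** 2))
    return float(np.sum(f(ks) * weights))

# Bandlimited test signal with bandwidth delta < pi (hypothetical example).
delta = 1.0
f = lambda t: np.cos(delta * np.asarray(t, dtype=float))

t = 2.3
approx = gauss_shannon(f, t, n=20, delta=delta)
err = abs(approx - f(t))
```

With $n = 20$ and $\delta = 1$, the predicted error scale $\exp(-\frac{\pi-\delta}{2}n)$ is on the order of $10^{-9}$, so the reconstruction error should be far below that of the plain truncated series.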

Citations (11)
