Memoryless scalar quantization for random frames (1804.02839v2)

Published 9 Apr 2018 in cs.NA

Abstract: Memoryless scalar quantization (MSQ) is a common technique for quantizing the frame coefficients of signals (used here as a model for generalized linear samples), making them compatible with digital technology. Quantization is generally not invertible, so one can only recover an approximation to the original signal from its quantized coefficients. The non-linear nature of quantization makes the analysis of the corresponding approximation error challenging, and this analysis is often simplified by invoking the so-called "white noise hypothesis" (WNH). However, the WNH is known to be non-rigorous and, at least in certain cases, invalid. Given a fixed, deterministic signal, we assume that the measurements are collected with a random frame whose analysis matrix has independent isotropic sub-Gaussian rows, and are then quantized via MSQ. In this setting, the numerically observed decay rate appears to agree with the prediction of the WNH. We rigorously establish sharp non-asymptotic error bounds, without using the WNH, that explain the observed decay rate. Furthermore, we show that the reconstruction error does not necessarily diminish as redundancy increases. We also extend this approach to the compressed sensing setting, obtaining rigorous error bounds that agree with empirical observations, again without resorting to the WNH.
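To make the setup concrete, here is a minimal sketch of the kind of experiment the abstract describes, not the paper's actual experiments: the Gaussian frame, the mid-rise quantizer, the step size, and the pseudo-inverse (canonical dual) reconstruction are all assumptions made for illustration. It quantizes the frame coefficients of a fixed unit-norm signal with MSQ and reports the linear-reconstruction error as the redundancy grows.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 20        # signal dimension
delta = 0.05  # quantizer step size (assumed value)

def msq(y, delta):
    """Memoryless scalar quantization: round each coefficient independently
    to the nearest point of the mid-rise lattice delta * (Z + 1/2)."""
    return delta * (np.floor(y / delta) + 0.5)

# Fixed, deterministic unit-norm signal.
x = rng.standard_normal(d)
x /= np.linalg.norm(x)

for m in (50, 200, 800, 3200):          # increasing redundancy m/d
    E = rng.standard_normal((m, d))     # random frame: independent isotropic (Gaussian) rows
    q = msq(E @ x, delta)               # quantized frame coefficients
    x_hat = np.linalg.pinv(E) @ q       # linear reconstruction via the canonical dual
    print(f"m = {m:5d}   reconstruction error = {np.linalg.norm(x - x_hat):.4e}")
```

Running this prints the reconstruction error for each redundancy level, which is the quantity whose decay in m the paper analyzes without the WNH.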

Citations (1)