Memoryless scalar quantization for random frames

Published 9 Apr 2018 in cs.NA (arXiv:1804.02839v2)

Abstract: Memoryless scalar quantization (MSQ) is a common technique for quantizing the frame coefficients of signals (used here as a model for generalized linear samples), making them compatible with digital technology. Quantization is generally not invertible, so one can only recover an approximation to the original signal from its quantized coefficients. The non-linear nature of quantization makes the analysis of the corresponding approximation error challenging, and this analysis is often simplified by invoking the "white noise hypothesis" (WNH). However, the WNH is known not to be rigorous and, at least in certain cases, not to be valid. Given a fixed, deterministic signal, we assume that the measurements are collected with a random frame whose analysis matrix has independent isotropic sub-Gaussian rows, and are then quantized via MSQ. In this setting, the numerically observed decay rate of the reconstruction error agrees with the prediction of the WNH. We rigorously establish sharp non-asymptotic error bounds, without using the WNH, that explain this observed decay rate. Furthermore, we show that the reconstruction error does not necessarily diminish as redundancy increases. We also extend this approach to the compressed sensing setting, again obtaining rigorous error bounds that agree with empirical observations without resorting to the WNH.
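
As a rough illustration of the setting described in the abstract (not the paper's analysis), the sketch below quantizes the frame coefficients of a fixed signal with a random Gaussian analysis matrix via uniform MSQ and reconstructs with the canonical dual frame (pseudoinverse). The Gaussian frame, the step size delta, and the dimensions are arbitrary choices for illustration only.

    # Minimal sketch of MSQ of random frame coefficients, assuming a Gaussian
    # analysis matrix and a uniform quantizer with step size `delta`;
    # reconstruction uses the canonical dual frame (pseudoinverse).
    import numpy as np

    rng = np.random.default_rng(0)

    k, delta = 10, 0.1          # signal dimension and quantizer step (arbitrary)
    x = rng.standard_normal(k)  # fixed, deterministic signal (drawn once)

    for m in (20, 80, 320, 1280):            # increasing redundancy m/k
        E = rng.standard_normal((m, k))      # random frame: isotropic sub-Gaussian rows
        y = E @ x                            # frame coefficients (generalized linear samples)
        q = delta * np.round(y / delta)      # memoryless scalar quantization
        x_hat = np.linalg.pinv(E) @ q        # linear reconstruction via canonical dual
        print(m, np.linalg.norm(x - x_hat))  # reconstruction error at each redundancy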
