
The Rate-Distortion Function and Excess-Distortion Exponent of Sparse Regression Codes with Optimal Encoding (1401.5272v6)

Published 21 Jan 2014 in cs.IT, math.IT, math.ST, and stat.TH

Abstract: This paper studies the performance of sparse regression codes for lossy compression with the squared-error distortion criterion. In a sparse regression code, codewords are linear combinations of subsets of columns of a design matrix. It is shown that with minimum-distance encoding, sparse regression codes achieve the Shannon rate-distortion function for i.i.d. Gaussian sources $R^*(D)$ as well as the optimal excess-distortion exponent. This completes a previous result which showed that $R^*(D)$ and the optimal exponent were achievable for distortions below a certain threshold. The proof of the rate-distortion result is based on the second moment method, a popular technique to show that a non-negative random variable $X$ is strictly positive with high probability. In our context, $X$ is the number of codewords within target distortion $D$ of the source sequence. We first identify the reason behind the failure of the standard second moment method for certain distortions, and illustrate the different failure modes via a stylized example. We then use a refinement of the second moment method to show that $R^*(D)$ is achievable for all distortion values. Finally, the refinement technique is applied to Suen's correlation inequality to prove the achievability of the optimal Gaussian excess-distortion exponent.
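The second moment method mentioned in the abstract rests on the Paley–Zygmund-type bound $P(X > 0) \ge (\mathbb{E}[X])^2 / \mathbb{E}[X^2]$, which follows from Cauchy–Schwarz. The toy Monte Carlo sketch below illustrates this bound for a simplified count $X$ of i.i.d. Gaussian codewords landing within distortion $D$ of a source vector; it is a stand-in for intuition only, not the paper's sparse regression code ensemble, and all parameter values are illustrative assumptions.

```python
import random


def second_moment_lower_bound(trials=500, n_codewords=50, dim=8, D=1.2, seed=0):
    """Monte Carlo illustration of the second moment method:
        P(X > 0) >= (E[X])^2 / E[X^2],
    where X counts random codewords within average squared distortion D
    of a fixed Gaussian source vector. Toy model, not the SPARC ensemble.
    """
    rng = random.Random(seed)
    source = [rng.gauss(0, 1) for _ in range(dim)]

    counts = []
    for _ in range(trials):
        x = 0
        for _ in range(n_codewords):
            # Draw a fresh i.i.d. Gaussian codeword and measure its
            # per-coordinate squared-error distortion to the source.
            codeword = [rng.gauss(0, 1) for _ in range(dim)]
            dist = sum((s - c) ** 2 for s, c in zip(source, codeword)) / dim
            if dist <= D:
                x += 1
        counts.append(x)

    ex = sum(counts) / trials                      # estimate of E[X]
    ex2 = sum(x * x for x in counts) / trials      # estimate of E[X^2]
    p_positive = sum(1 for x in counts if x > 0) / trials
    bound = ex * ex / ex2 if ex2 > 0 else 0.0
    return p_positive, bound


p, b = second_moment_lower_bound()
print(f"P(X > 0) ~ {p:.3f}  >=  (E X)^2 / E[X^2] ~ {b:.3f}")
```

Because the inequality is an instance of Cauchy–Schwarz, it also holds exactly for the empirical sample, so the printed estimate of $P(X>0)$ always dominates the estimated bound. The paper's refinement addresses regimes where $\mathbb{E}[X^2] \gg (\mathbb{E}[X])^2$ and this plain bound becomes vacuous.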

Authors (2)
  1. Ramji Venkataramanan (45 papers)
  2. Sekhar Tatikonda (33 papers)
Citations (2)