
Lossy Compression via Sparse Linear Regression: Performance under Minimum-distance Encoding (1202.0840v4)

Published 3 Feb 2012 in cs.IT, math.IT, and stat.ML

Abstract: We study a new class of codes for lossy compression with the squared-error distortion criterion, designed using the statistical framework of high-dimensional linear regression. Codewords are linear combinations of subsets of columns of a design matrix. Called a Sparse Superposition or Sparse Regression codebook, this structure is motivated by an analogous construction proposed recently by Barron and Joseph for communication over an AWGN channel. For i.i.d. Gaussian sources and minimum-distance encoding, we show that such a code can attain the Shannon rate-distortion function with the optimal error exponent, for all distortions below a specified value. It is also shown that sparse regression codes are robust in the following sense: a codebook designed to compress an i.i.d. Gaussian source of variance $\sigma^2$ with (squared-error) distortion $D$ can compress any ergodic source of variance less than $\sigma^2$ to within distortion $D$. Thus the sparse regression ensemble retains many of the good covering properties of the i.i.d. random Gaussian ensemble, while having a compact representation in terms of a matrix whose size is a low-order polynomial in the block-length.
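
The sketch below illustrates the codebook structure described in the abstract: a design matrix divided into sections, a codeword formed by picking one column per section, and an exhaustive minimum-distance encoder. The specific parameter choices (block length n, sections L, columns per section M, coefficient value c, and the column normalization) are toy assumptions for illustration, not the paper's asymptotic regime, and the brute-force search is only feasible at this small scale.

```python
import itertools
import numpy as np

# Toy sketch of a sparse regression (sparse superposition) codebook with
# brute-force minimum-distance encoding. All parameter values here are
# illustrative assumptions, not taken from the paper's analysis.

rng = np.random.default_rng(0)

n = 12   # block length
L = 3    # number of sections
M = 8    # columns per section; rate R = L * log(M) / n nats per source sample
c = 1.0  # common value of the nonzero coefficients (assumed for this sketch)

# Design matrix with i.i.d. N(0, 1/L) entries, so each codeword A @ beta has
# per-sample variance roughly c**2 (a convenient normalization for the toy example).
A = rng.normal(0.0, np.sqrt(1.0 / L), size=(n, M * L))

def codeword(indices):
    """Codeword for one column choice per section: the sum of chosen columns, scaled by c."""
    cols = [A[:, sec * M + j] for sec, j in enumerate(indices)]
    return c * np.sum(cols, axis=0)

def min_distance_encode(x):
    """Exhaustive minimum-distance encoder: return the column choices whose codeword
    is closest to x in squared error, along with the per-sample distortion."""
    best_idx, best_dist = None, np.inf
    for indices in itertools.product(range(M), repeat=L):
        d = np.sum((x - codeword(indices)) ** 2)
        if d < best_dist:
            best_idx, best_dist = indices, d
    return best_idx, best_dist / n

# Compress one block of an i.i.d. Gaussian source of variance 1.
x = rng.normal(0.0, 1.0, size=n)
idx, distortion = min_distance_encode(x)
print("chosen column per section:", idx, " per-sample distortion:", round(distortion, 3))
```

The exhaustive search above visits all M^L codewords, which is exponential in the number of sections; the paper analyzes the performance of this optimal (minimum-distance) encoder rather than proposing it as a practical algorithm.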

Authors (3)
  1. Ramji Venkataramanan (45 papers)
  2. Antony Joseph (14 papers)
  3. Sekhar Tatikonda (33 papers)
Citations (17)
