
SLRMA: Sparse Low-Rank Matrix Approximation for Data Compression (1507.01673v2)

Published 7 Jul 2015 in cs.MM

Abstract: Low-rank matrix approximation (LRMA) is a powerful technique for signal processing and pattern analysis. However, its potential for data compression has not yet been fully investigated in the literature. In this paper, we propose sparse low-rank matrix approximation (SLRMA), an effective computational tool for data compression. SLRMA extends conventional LRMA by exploiting both the intra- and inter-coherence of data samples simultaneously. With the aid of prescribed orthogonal transforms (e.g., the discrete cosine/wavelet transform and the graph transform), SLRMA decomposes a matrix into a product of two smaller matrices, where one matrix is made of extremely sparse and orthogonal column vectors and the other consists of the transform coefficients. Technically, we formulate SLRMA as a constrained optimization problem, i.e., minimizing the approximation error in the least-squares sense regularized by an $\ell_0$-norm penalty and an orthogonality constraint, and solve it using the inexact augmented Lagrangian multiplier method. Through extensive tests on real-world data, such as 2D image sets and 3D dynamic meshes, we observe that (i) SLRMA empirically converges well; (ii) SLRMA produces approximation error comparable to that of LRMA but in a much sparser form; and (iii) SLRMA-based compression schemes significantly outperform the state of the art in rate-distortion performance.
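In symbols, the formulation the abstract describes is $\min_{\mathbf{B},\mathbf{C}} \|\mathbf{X}-\mathbf{B}\mathbf{C}\|_F^2$ subject to $\|\mathbf{b}_j\|_0 \le s$ for each column of $\mathbf{B}$ and $\mathbf{B}^\top\mathbf{B}=\mathbf{I}$. The snippet below is a minimal NumPy sketch of that idea, not the paper's method: it substitutes a plain alternating scheme (least squares for the coefficients, an orthogonal Procrustes update plus hard thresholding for the sparse factor) for the inexact augmented Lagrangian solver, and it omits the prescribed orthogonal transforms. The function name `slrma_sketch` and all parameter choices are illustrative assumptions.

```python
import numpy as np

def slrma_sketch(X, rank, sparsity, n_iters=50, seed=0):
    """Toy alternating scheme for min ||X - B C||_F^2 where B has
    sparse, near-orthogonal columns. Illustrative only; NOT the
    paper's inexact augmented Lagrangian multiplier solver."""
    rng = np.random.default_rng(seed)
    m, _ = X.shape
    B, _ = np.linalg.qr(rng.standard_normal((m, rank)))  # orthonormal init
    for _ in range(n_iters):
        # C-step: least-squares coefficients for the current B.
        C, *_ = np.linalg.lstsq(B, X, rcond=None)
        # B-step: orthogonal Procrustes solution (B = U V^T from the
        # SVD of X C^T) of min ||X - B C||_F s.t. B^T B = I.
        U, _, Vt = np.linalg.svd(X @ C.T, full_matrices=False)
        B = U @ Vt
        # l0-step: keep the 'sparsity' largest-magnitude entries per
        # column and renormalize; orthogonality becomes approximate.
        for j in range(rank):
            col = B[:, j]
            col[np.argsort(np.abs(col))[:-sparsity]] = 0.0
            nrm = np.linalg.norm(col)
            if nrm > 0:
                B[:, j] = col / nrm
    C, *_ = np.linalg.lstsq(B, X, rcond=None)
    return B, C

# Demo: a matrix whose true sparse factor has disjoint column supports.
rng = np.random.default_rng(1)
m, n, r, s = 64, 200, 4, 16
B_true = np.zeros((m, r))
for j in range(r):
    B_true[j * s:(j + 1) * s, j] = rng.standard_normal(s)
    B_true[:, j] /= np.linalg.norm(B_true[:, j])
X = B_true @ rng.standard_normal((r, n)) + 0.01 * rng.standard_normal((m, n))
B, C = slrma_sketch(X, rank=r, sparsity=s)
print("relative error:", np.linalg.norm(X - B @ C) / np.linalg.norm(X))
print("nonzeros per column of B:", np.count_nonzero(B, axis=0))
```

Because hard thresholding breaks exact orthogonality, this sketch only maintains $\mathbf{B}^\top\mathbf{B} \approx \mathbf{I}$; the paper's augmented Lagrangian formulation enforces the $\ell_0$ and orthogonality constraints jointly and further couples the factors to the prescribed DCT/wavelet/graph transforms of the data.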

Authors (4)
  1. Junhui Hou (138 papers)
  2. Lap-Pui Chau (57 papers)
  3. Nadia Magnenat-Thalmann (7 papers)
  4. Ying He (102 papers)
