Alternating minimization for dictionary learning: Local Convergence Guarantees (1711.03634v4)

Published 9 Nov 2017 in stat.ML and cs.LG

Abstract: We present theoretical guarantees for an alternating minimization algorithm for the dictionary learning/sparse coding problem. The dictionary learning problem is to factorize vector samples $y_1, y_2, \ldots, y_n$ into an appropriate basis (dictionary) $A^*$ and sparse vectors $x_1^*, \ldots, x_n^*$. Our algorithm is a simple alternating minimization procedure that switches between $\ell_1$ minimization and gradient descent in alternate steps. Dictionary learning and specifically alternating minimization algorithms for dictionary learning are well studied both theoretically and empirically. However, in contrast to previous theoretical analyses for this problem, we replace a condition on the operator norm (that is, the largest magnitude singular value) of the true underlying dictionary $A^*$ with a condition on the matrix infinity norm (that is, the largest magnitude term). Our guarantees are under a reasonable generative model that allows for dictionaries with growing operator norms, and can handle an arbitrary level of overcompleteness, while having sparsity that is information theoretically optimal. We also establish upper bounds on the sample complexity of our algorithm.
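The abstract describes a procedure that alternates between an $\ell_1$-minimization (sparse coding) step and a gradient-descent update of the dictionary. The sketch below is a generic illustration of that alternating scheme, not the authors' exact algorithm or initialization: it assumes a squared-error reconstruction loss, uses ISTA for the $\ell_1$ step, and renormalizes dictionary columns after each gradient step; the regularization weight `lam`, step size, and iteration counts are illustrative choices.

```python
# Minimal sketch of alternating minimization for dictionary learning.
# Hyperparameters (lam, step, iteration counts) are illustrative assumptions,
# not values taken from the paper.
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding operator used by ISTA."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code_ista(A, y, lam=0.1, n_iter=100):
    """l1 step: approximately solve min_x 0.5*||y - A x||_2^2 + lam*||x||_1 via ISTA."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

def dictionary_learning(Y, K, lam=0.1, step=0.5, n_outer=50, seed=0):
    """Alternate between l1 sparse coding and a gradient step on the dictionary."""
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    A = rng.standard_normal((d, K))
    A /= np.linalg.norm(A, axis=0)           # unit-norm columns
    for _ in range(n_outer):
        # Sparse coding: l1 minimization for each sample with A held fixed
        X = np.column_stack([sparse_code_ista(A, Y[:, i], lam) for i in range(n)])
        # Dictionary update: one gradient-descent step on 0.5*||Y - A X||_F^2
        grad_A = (A @ X - Y) @ X.T / n
        A -= step * grad_A
        A /= np.maximum(np.linalg.norm(A, axis=0), 1e-12)  # renormalize columns
    return A, X

# Usage on synthetic data drawn from a planted sparse model.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    d, K, n, s = 20, 30, 200, 3
    A_true = rng.standard_normal((d, K))
    A_true /= np.linalg.norm(A_true, axis=0)
    X_true = np.zeros((K, n))
    for i in range(n):
        idx = rng.choice(K, s, replace=False)
        X_true[idx, i] = rng.standard_normal(s)
    Y = A_true @ X_true
    A_hat, X_hat = dictionary_learning(Y, K)
    print("relative reconstruction error:",
          np.linalg.norm(Y - A_hat @ X_hat) / np.linalg.norm(Y))
```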

Authors (2)
  1. Niladri S. Chatterji (21 papers)
  2. Peter L. Bartlett (86 papers)
Citations (25)
