Compressed Super-Resolution I: Maximal Rank Sum-of-Squares (2001.01644v1)

Published 1 Jan 2020 in math.NA and cs.NA

Abstract: Let $\mu(t) = \sum_{\tau\in S} \alpha_\tau \delta(t-\tau)$ denote an $|S|$-atomic measure defined on $[0,1]$, satisfying $\Delta := \min_{\tau\neq \tau'}|\tau - \tau'|\geq |S|\cdot n^{-1}$. Let $\eta(\theta) = \sum_{\tau\in S} a_\tau D_n(\theta - \tau) + b_\tau D_n'(\theta - \tau)$ denote the polynomial obtained from the Dirichlet kernel $D_n(\theta) = \frac{1}{n+1}\sum_{|k|\leq n} e^{2\pi i k \theta}$ and its derivative by solving the system $\left\{\eta(\tau) = 1,\ \eta'(\tau) = 0,\; \forall \tau \in S\right\}$. We provide evidence that for sufficiently large $n$ and $\Delta > |S|^2 n^{-1}$, the non-negative polynomial $1 - |\eta(\theta)|^2$, which vanishes at the atoms $\tau \in S$ and is bounded by $1$ everywhere else on the interval $[0,1]$, can be written as a sum-of-squares whose associated Gram matrix has rank $n-|S|$. Unlike previous work, our approach does not rely on the Fejér-Riesz theorem, which prevents developing intuition on the Gram matrix, but instead requires a lower bound on the singular values of a (truncated) large ($O(10^{10})$) matrix. Despite the memory requirements, which currently prevent dealing with such a matrix efficiently, we show how such lower bounds can be derived through power iterations and convolutions with special functions for sizes up to $O(10^7)$. We also provide numerical simulations suggesting that the spectrum remains approximately constant with the truncation size as soon as this size is larger than $100$.
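
To make the construction concrete, here is a minimal NumPy sketch that builds $\eta$ from the Dirichlet kernel and its derivative by solving the interpolation system $\{\eta(\tau)=1,\ \eta'(\tau)=0,\ \forall\tau\in S\}$, then evaluates $1-\eta(\theta)^2$ on a grid. The degree $n=200$ and the four atom locations are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def D(theta, n, order=0):
    """Dirichlet kernel D_n(theta) = (1/(n+1)) * sum_{|k|<=n} e^{2 pi i k theta},
    or its first/second derivative (order = 1, 2); real-valued by symmetry."""
    theta = np.atleast_1d(np.asarray(theta, dtype=float))[..., None]
    k = np.arange(1, n + 1, dtype=float)
    if order == 0:
        return (1.0 + 2.0 * np.cos(2 * np.pi * k * theta).sum(axis=-1)) / (n + 1)
    if order == 1:
        return -4 * np.pi * (k * np.sin(2 * np.pi * k * theta)).sum(axis=-1) / (n + 1)
    return -8 * np.pi**2 * (k**2 * np.cos(2 * np.pi * k * theta)).sum(axis=-1) / (n + 1)

# Hypothetical configuration: separation 0.25 exceeds |S|^2 / n = 0.08.
n = 200
S = np.array([0.10, 0.35, 0.60, 0.85])
m = len(S)

# 2|S| x 2|S| interpolation system: eta(tau) = 1, eta'(tau) = 0 for all tau in S.
diff = S[:, None] - S[None, :]                  # pairwise tau_j - tau_l
A = np.block([[D(diff, n, 0), D(diff, n, 1)],
              [D(diff, n, 1), D(diff, n, 2)]])
coef = np.linalg.solve(A, np.concatenate([np.ones(m), np.zeros(m)]))
a, b = coef[:m], coef[m:]

# 1 - eta^2 should vanish at the atoms (up to floating-point error) and stay
# non-negative elsewhere for a well-separated configuration.
grid = np.linspace(0.0, 1.0, 4001)
d = grid[:, None] - S[None, :]
eta = D(d, n, 0) @ a + D(d, n, 1) @ b
print("min over grid of 1 - eta^2:", (1 - eta**2).min())
```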
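The paper's singular-value lower bounds combine power iterations with convolutions against special functions; those details are specific to its $O(10^{10})$ truncated matrix. The sketch below illustrates only the generic power-iteration ingredient: approximating the extreme singular values of an operator accessed purely through matrix-vector products, reaching the smallest one via a shifted iteration on $s^2 I - A^\top A$. The function names and the toy matrix are hypothetical, and the output is a numerical estimate, not the paper's certified bound:

```python
import numpy as np

def top_singular_value(matvec, rmatvec, dim, iters=1000, seed=0):
    """Estimate sigma_max via power iteration on A^T A, matrix-free."""
    v = np.random.default_rng(seed).standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = rmatvec(matvec(v))
        v = w / np.linalg.norm(w)
    return np.sqrt(v @ rmatvec(matvec(v)))

def smallest_singular_value(matvec, rmatvec, dim, iters=1000, seed=1):
    """Estimate sigma_min by power iteration on s^2 I - A^T A, whose
    top eigenvalue is s^2 - sigma_min^2."""
    s2 = top_singular_value(matvec, rmatvec, dim, iters) ** 2
    v = np.random.default_rng(seed).standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = s2 * v - rmatvec(matvec(v))
        v = w / np.linalg.norm(w)
    lam = v @ (s2 * v - rmatvec(matvec(v)))    # Rayleigh quotient of the shift
    return np.sqrt(max(s2 - lam, 0.0))

# Toy usage on a small random matrix (the paper's matrix is far larger).
M = np.random.default_rng(2).standard_normal((500, 300))
est = smallest_singular_value(lambda x: M @ x, lambda y: M.T @ y, 300)
print("estimate:", est, " exact:", np.linalg.svd(M, compute_uv=False)[-1])
```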
