Randomized approximation of summable sequences -- adaptive and non-adaptive (2308.01705v3)
Abstract: We prove lower bounds for the randomized approximation of the embedding $\ell_1^m \rightarrow \ell_\infty^m$ based on algorithms that use arbitrary linear (hence non-adaptive) information provided by a (randomized) measurement matrix $N \in \mathbb{R}^{n \times m}$. These lower bounds reflect the increasing difficulty of the problem for $m \to \infty$, namely, a term $\sqrt{\log m}$ in the complexity $n$. This result implies that non-compact operators between arbitrary Banach spaces are not approximable using non-adaptive Monte Carlo methods. We also compare these lower bounds for non-adaptive methods with upper bounds based on adaptive, randomized recovery methods, for which the complexity $n$ only exhibits a $(\log\log m)$-dependence. In doing so, we give an example of linear problems where the error for adaptive vs. non-adaptive Monte Carlo methods shows a gap of order $n^{1/2} (\log n)^{-1/2}$.
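For concreteness, a conventional way to formalize the non-adaptive randomized setting described above is sketched below; the notation ($A_\omega$, $\varphi_\omega$, $e(A)$) is a standard convention from information-based complexity and is an assumption here, not taken verbatim from the paper. A non-adaptive randomized method applies a random measurement matrix and a reconstruction map,
\[
  A_\omega(x) = \varphi_\omega(N_\omega x), \qquad N_\omega \in \mathbb{R}^{n \times m}, \qquad
  e(A) = \sup_{\|x\|_1 \le 1} \, \mathbb{E}_\omega \bigl\| x - A_\omega(x) \bigr\|_\infty ,
\]
where both $N_\omega$ and $\varphi_\omega \colon \mathbb{R}^n \to \mathbb{R}^m$ may depend on the random element $\omega$ but not on $x$ beyond the measurements $N_\omega x$. Adaptive methods, by contrast, may choose each of the $n$ linear measurements based on the outcomes of the previous ones, which is the extra freedom behind the gap stated in the abstract.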