A minimax optimal approach to high-dimensional double sparse linear regression (2305.04182v2)

Published 7 May 2023 in math.ST and stat.TH

Abstract: In this paper, we focus on high-dimensional double sparse linear regression, that is, regression under a combination of element-wise and group-wise sparsity. To address this problem, we propose an IHT-style (iterative hard thresholding) procedure that dynamically updates the threshold at each step. We establish matching upper and lower bounds for parameter estimation, showing the optimality of our proposal in the minimax sense. More importantly, we introduce a fully adaptive optimal procedure designed to handle unknown sparsity and noise levels. Our adaptive procedure attains optimal statistical accuracy with fast convergence. Additionally, we elucidate the significance of the element-wise sparsity level $s_0$ as the trade-off between IHT and group IHT, underscoring the superior performance of our method over both. Leveraging the beta-min condition, we establish that our IHT-style procedure can attain the oracle estimation rate and achieve almost full recovery of the true support set at both the element level and group level. Finally, we demonstrate the superiority of our method by comparing it with several state-of-the-art algorithms on both synthetic and real-world datasets.
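To make the double sparse structure concrete, below is a minimal Python sketch of an IHT-style iteration for a coefficient vector with at most $s$ active groups and at most $s_0$ nonzero entries per active group. The function names (`double_sparse_project`, `double_sparse_iht`), the fixed step size, and the fixed iteration count are illustrative assumptions; the paper's dynamically updated threshold and its fully adaptive tuning of unknown sparsity and noise levels are not reproduced here.

```python
import numpy as np


def double_sparse_project(beta, groups, s, s0):
    """Euclidean projection of `beta` onto the (s, s0)-double-sparse set:
    at most s active groups and at most s0 nonzero entries per active group.
    `groups` is a list of integer index arrays, one per group."""
    out = np.zeros_like(beta)
    trimmed_groups = []
    scores = np.empty(len(groups))
    for g, idx in enumerate(groups):
        coefs = beta[idx]
        # within each group, keep only the s0 largest-magnitude entries
        keep = np.argsort(np.abs(coefs))[-s0:]
        trimmed = np.zeros_like(coefs)
        trimmed[keep] = coefs[keep]
        trimmed_groups.append(trimmed)
        scores[g] = np.linalg.norm(trimmed)  # energy left after trimming
    # keep the s groups with the largest remaining energy
    for g in np.argsort(scores)[-s:]:
        out[groups[g]] = trimmed_groups[g]
    return out


def double_sparse_iht(X, y, groups, s, s0, n_iter=300):
    """Plain IHT: gradient step followed by the double sparse projection.
    Fixed conservative step size; not the paper's adaptive procedure."""
    n, p = X.shape
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / largest eigenvalue of X^T X
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        beta = double_sparse_project(beta - step * grad, groups, s, s0)
    return beta


# Toy usage: 20 groups of size 10, 3 active groups with 4 nonzeros each.
rng = np.random.default_rng(0)
p, group_size = 200, 10
groups = [np.arange(g * group_size, (g + 1) * group_size) for g in range(20)]
beta_true = np.zeros(p)
for g in (2, 7, 15):
    beta_true[groups[g][:4]] = rng.normal(size=4)
X = rng.normal(size=(500, p))
y = X @ beta_true + 0.1 * rng.normal(size=500)
beta_hat = double_sparse_iht(X, y, groups, s=3, s0=4)
print(np.linalg.norm(beta_hat - beta_true))
```

For fixed $(s, s_0)$, this two-stage selection (trim each group to its best $s_0$ entries, then rank groups by the remaining norm) is the exact Euclidean projection onto the double sparse set; the paper's contribution lies in replacing the fixed thresholds with a dynamic, data-adaptive update that achieves the minimax optimal estimation rate.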
