
Sparse Linear Regression

Updated 23 September 2025
  • Sparse linear regression models are high-dimensional statistical frameworks that enforce parsimony by selecting only a small subset of nonzero coefficients.
  • They employ iterative, scale-adaptive algorithms like the scaled lasso to jointly update regression coefficients and noise estimates for improved accuracy.
  • Empirical and theoretical advances, including oracle inequalities and asymptotic normality, reinforce their reliability in variable selection and prediction.

Sparse linear regression models are high-dimensional statistical frameworks that seek parsimonious representations of the relationship between a set of predictors and a response variable by promoting solutions with only a small subset of nonzero coefficients. Such models are fundamental in modern statistical inference because they address the challenges of high dimensionality, interpretability, and variable selection. They underpin algorithms such as the lasso, basis pursuit, and an expanding range of convex and nonconvex formulations. In contemporary research, questions of computational tractability, statistical optimality, estimator consistency, noise level estimation, and uncertainty quantification are central considerations.

1. Canonical Formulation and Algorithmic Developments

Sparse linear regression models are typically framed as the solution to problems of the form

$$\min_{\beta \in \mathbb{R}^p} \; \frac{1}{2n}\|y - X\beta\|_2^2 + \lambda \, \mathcal{R}(\beta)$$

where $y \in \mathbb{R}^n$ is the response, $X \in \mathbb{R}^{n \times p}$ is the design matrix, $\lambda > 0$ is a tuning parameter, and $\mathcal{R}(\beta)$ is a penalty that promotes sparsity, such as the $\ell_1$-norm or various nonconvex penalties.
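
For the $\ell_1$ case, this objective coincides with the one minimized by scikit-learn's Lasso (which uses the same $\tfrac{1}{2n}$ scaling), so a minimal sketch of the formulation looks as follows; the problem sizes and penalty level below are illustrative choices, not values from the source.

```python
# Minimal sketch of the l1-penalized formulation above using scikit-learn,
# whose Lasso objective (1/(2n))||y - X b||_2^2 + alpha ||b||_1 matches
# R(beta) = ||beta||_1 with lambda = alpha.  Sizes and lambda are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 200, 500, 5                        # samples, predictors, true nonzeros
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 2.0
y = X @ beta_true + rng.standard_normal(n)   # unit noise level

lam = 0.1                                    # tuning parameter (illustrative)
fit = Lasso(alpha=lam, fit_intercept=False).fit(X, y)
print("selected variables:", np.flatnonzero(fit.coef_))
```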

A central advance is the scaled sparse linear regression framework (Sun et al., 2011), in which both the regression coefficients $\beta$ and the noise level $\sigma$ are estimated jointly by minimizing a convex penalized loss. The critical update steps involve:

  • Estimating $\sigma$ from the scaled residuals,
  • Updating $\beta$ by solving a penalized least-squares problem with a penalty level proportional to the current estimate of $\sigma$,
  • Iterating until convergence to a stationary point of the joint loss.

A special case is the scaled lasso, where the penalty is purely $\ell_1$ and the loss reduces to joint minimization of

$$L_0(\beta, \sigma) = \frac{\|y - X\beta\|_2^2}{2n\sigma} + \frac{\sigma}{2} + \lambda_0 \|\beta\|_1.$$

This approach yields scale-equivariant estimation and efficient algorithms that exploit solution paths and convex structure, requiring little more computation than standard lasso solution path tracking.
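
A minimal sketch of this alternating scheme is shown below, assuming scikit-learn's Lasso as the inner solver; the helper name `scaled_lasso`, the initialization, and the stopping rule are our own choices rather than details taken from the source.

```python
# Sketch of the scaled-lasso iteration: alternate a residual-based noise update
# with a lasso fit whose penalty level is lambda_0 * sigma.
import numpy as np
from sklearn.linear_model import Lasso

def scaled_lasso(X, y, lam0, n_iter=50, tol=1e-6):
    n = X.shape[0]
    sigma = np.linalg.norm(y) / np.sqrt(n)          # crude initial noise estimate
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        # beta-update: penalized least squares with penalty proportional to sigma
        fit = Lasso(alpha=lam0 * sigma, fit_intercept=False, max_iter=10_000).fit(X, y)
        beta = fit.coef_
        # sigma-update: root-mean-squared residual (no degrees-of-freedom adjustment)
        sigma_new = np.linalg.norm(y - X @ beta) / np.sqrt(n)
        if abs(sigma_new - sigma) < tol * sigma:
            sigma = sigma_new
            break
        sigma = sigma_new
    return beta, sigma
```

Applied to the simulated data above, `scaled_lasso(X, y, lam0=np.sqrt(2 * np.log(p) / n))` returns both a sparse coefficient estimate and a data-driven noise level; the universal penalty level $\lambda_0 = \sqrt{2\log(p)/n}$ is a common choice in this literature.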

2. Oracle Inequalities and Theoretical Guarantees

Critical theoretical contributions in this area consist of oracle inequalities that upper-bound the prediction and estimation errors of the regression coefficients and noise level. Under regularity conditions (notably, compatibility factors of the design matrix), the scaled lasso and related estimators achieve risk close to that of an oracle estimator that knows the correct sparsity pattern. Specifically, for a properly chosen penalty $\lambda_0$ and bounded maximal correlation $z^* = \|X^\top (y - X\beta^*)\|_\infty / (n\sigma^*)$, the scaled lasso estimator $(\hat\beta, \hat\sigma)$ satisfies

$$\max\left(1 - \frac{\hat{\sigma}}{\sigma^*},\ 1 - \frac{\sigma^*}{\hat{\sigma}}\right) \leq \tau_0,$$

$$\frac{\|X\hat{\beta} - X\beta^*\|_2}{\sqrt{n}\,\sigma^*} \leq \frac{\tau_0}{1-\tau_0},$$

where $\tau_0$ is an explicit rate determined by the design compatibility and the penalty level. This theoretical apparatus quantifies the proximity of the scaled lasso to hypothetical optimal sparse estimators.

Furthermore, the scaled lasso estimator of the noise level achieves asymptotic normality, $$\sqrt{n}\left(\frac{\hat{\sigma}}{\sigma^*} - 1\right) \to N(0, 1/2),$$ in high-dimensional regimes ($p \gg n$), provided the effective sparsity is suitably small and the design satisfies compatibility (or restricted eigenvalue) conditions.
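
A rough Monte Carlo check of this limit can be run with the `scaled_lasso` sketch from Section 1 (assumed to be defined in the same script); the dimensions, signal strength, and replication count below are illustrative.

```python
# Rough sanity check of sqrt(n) * (sigma_hat / sigma - 1) ~ N(0, 1/2),
# reusing the scaled_lasso sketch above.  All settings are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, p, s, sigma_star = 200, 300, 3, 1.0
lam0 = np.sqrt(2 * np.log(p) / n)            # common "universal" penalty level

stats = []
for _ in range(200):
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:s] = 3.0
    y = X @ beta_true + sigma_star * rng.standard_normal(n)
    _, sigma_hat = scaled_lasso(X, y, lam0)
    stats.append(np.sqrt(n) * (sigma_hat / sigma_star - 1))

print("empirical mean:", np.mean(stats))      # roughly 0 when conditions hold
print("empirical variance:", np.var(stats))   # roughly 1/2 when conditions hold
```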

3. Algorithmic Structure and Computational Considerations

The scaled sparse linear regression algorithm involves alternating steps:

  • Update the noise estimate: $\hat{\sigma} \leftarrow \|y - X\beta^{\text{old}}\|_2 / \sqrt{(1-a)n}$, with a degrees-of-freedom adjustment $a \geq 0$;
  • Update the coefficient estimate along the solution path, $\beta^{\text{new}} \leftarrow \hat\beta(\hat{\sigma}\lambda_0)$, by selecting the path solution at the penalty level $\hat{\sigma}\lambda_0$ from a grid of penalty levels.

This process is computationally efficient, requiring essentially only the solution-path computation for a sparse regression estimator (e.g., lasso, SCAD, MCP) at penalty parameters above a baseline threshold. When the penalty is $\ell_1$, the entire procedure amounts to alternating minimization of a jointly convex penalized loss and is provably convergent.

The key to tractability is leveraging solution paths (as in coordinate descent or homotopy algorithms for lasso-type problems), combined with closed-form updates for the variance parameter in each iteration.
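
A path-based variant of the iteration, in the spirit described above, can be sketched as follows: it precomputes a lasso path with scikit-learn's `lasso_path` and then looks up the path point nearest the current penalty level $\hat{\sigma}\lambda_0$. The grid resolution and the choice $a = 0$ (no degrees-of-freedom adjustment) are simplifying assumptions.

```python
# Path-based sketch: compute the lasso solution path once, then alternate the
# closed-form sigma update with a lookup of the path point closest to lam0*sigma.
import numpy as np
from sklearn.linear_model import lasso_path

def scaled_lasso_path(X, y, lam0, n_alphas=100, n_iter=50, tol=1e-6):
    n = X.shape[0]
    alphas, coefs, _ = lasso_path(X, y, n_alphas=n_alphas)    # coefs: (p, n_alphas)
    sigma = np.linalg.norm(y) / np.sqrt(n)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        j = np.argmin(np.abs(alphas - lam0 * sigma))          # nearest grid penalty
        beta = coefs[:, j]
        sigma_new = np.linalg.norm(y - X @ beta) / np.sqrt(n)  # a = 0 adjustment
        if abs(sigma_new - sigma) < tol * sigma:
            sigma = sigma_new
            break
        sigma = sigma_new
    return beta, sigma
```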

4. Empirical Performance and Practical Impact

Empirical studies in (Sun et al., 2011) demonstrate that the scaled lasso and its nonconvex-penalty extensions (e.g., scaled MCP) achieve lower bias and variability in estimating the noise level and regression coefficients than standard joint convex minimization schemes and the cross-validated lasso. Performance gains are most pronounced when model selection is followed by least-squares refitting. Extensive simulation scenarios, including cases with many weak nonzero coefficients and unfavorable signal-to-noise ratios, confirm that these methods achieve empirical error bounds consistent with the theoretical rates.

Real-data applications (e.g., high-dimensional gene expression prediction) show that the proposed methods can select significantly sparser and more stable models without sacrificing predictive accuracy, often matching or improving on the prediction mean squared error of a lasso fit with an optimally tuned penalty. Stability selection procedures further suggest that estimators such as the scaled MCP are particularly parsimonious and reproducible.

5. Connections to Broader High-Dimensional Inference

The joint estimation of regression coefficients and noise level in the scaled sparse linear regression model has noteworthy implications:

  • It enables inference in scenarios where the penalty must adapt to the unknown noise level, which is intrinsic in high-dimensional applications.
  • The compatibility (or restricted eigenvalue) conditions underpinning the theory are closely tied to emerging results in estimation and uncertainty quantification in high-dimensional models.
  • The oracle inequalities and asymptotic normality extend to post-selection least squares estimators, highlighting their utility for both point estimation and the construction of confidence intervals.

The methodology informs regularization and variable selection across a spectrum of penalized regression contexts, extending naturally to nonconvex penalties and providing a template for the construction of adaptive scale-equivariant inference procedures.

6. Limitations and Extensions

Key limitations include the assumption of suitable compatibility or restricted eigenvalue properties for the design, as well as the need for the effective sparsity to be low relative to the sample size. Model selection for exact support recovery remains challenging in the presence of weak signals or high collinearity, and the scaled procedure primarily controls estimation and prediction error rates rather than precise support identification. Nevertheless, refinements (such as stability selection applied after scaling) improve variable selection in practice.

Extensions encompass refitting after model selection, application to nonconvex penalties (e.g., MCP, SCAD), and generalization to model frameworks where the penalty adapts to other unknown nuisance parameters.
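
As a concrete illustration of the refitting extension, the following sketch performs ordinary least squares on the support selected by a sparse fit; the helper name and the zero-threshold are our own choices.

```python
# Sketch of post-selection least-squares refitting: keep the selected support
# from a sparse estimate (e.g., the scaled_lasso sketch earlier) and refit
# those columns by ordinary least squares.
import numpy as np

def refit_least_squares(X, y, beta_hat, eps=1e-10):
    support = np.flatnonzero(np.abs(beta_hat) > eps)   # selected variables
    beta_refit = np.zeros_like(beta_hat)
    if support.size:
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        beta_refit[support] = coef
    return beta_refit, support
```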


In summary, the scaled sparse linear regression framework (Sun et al., 2011) systematically addresses the joint estimation of regression coefficients and noise level in high-dimensional models through a tractable, iterative convex minimization approach. By incorporating a scale-adaptive penalty and establishing strong theoretical guarantees—including oracle inequalities and asymptotic normality—the methodology sets a rigorous foundation for adaptive and reliable sparse regression in modern data analytics.
