Unregularized Least Squares Method
- Unregularized Least Squares Method is a linear regression approach that estimates parameters by minimizing the sum of squared residuals without adding penalty terms.
- Algorithmic innovations such as LU factorization and a simplified Gram–Schmidt orthogonalization compute its coefficients efficiently, without matrix inversion, and enhance numerical stability.
- In high-dimensional settings, a generalized unregularized estimator and the associated LAT and RAT algorithms enable reliable variable screening and support recovery under mild conditions without inducing shrinkage bias.
The unregularized least squares method, often referred to as ordinary least squares (OLS) or linear least squares (LLS), represents a foundational approach in statistical inference and data fitting, particularly for linear models. This method seeks parameter estimates that minimize the residual sum of squares without introducing explicit regularization (penalty) terms. Across classical and high-dimensional regimes, as well as in algorithmic innovations, OLS and its unregularized generalizations remain central to statistical theory and practice.
1. Mathematical Formulation and Classical Properties
Given a response vector $y \in \mathbb{R}^{n}$ and a predictor matrix $X \in \mathbb{R}^{n \times p}$, the unregularized least squares estimator solves:

$$\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^{p}} \|y - X\beta\|_2^2.$$
When $X^\top X$ is invertible (typically $n \geq p$ and $X$ has full column rank), the unique minimizer is:

$$\hat{\beta} = (X^\top X)^{-1} X^\top y.$$
This estimator is unbiased, achieves minimum variance among linear unbiased estimators (the Gauss–Markov theorem), and has an explicitly computable covariance when errors are homoskedastic.
In the classical regime ($n > p$), properties such as support recovery and estimator error bounds are well understood. The method, however, encounters challenges when $p > n$, because $X^\top X$ becomes singular and hence not invertible.
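As a concrete illustration, the following minimal Python sketch (on simulated data; the setup and variable names are illustrative, not taken from the cited papers) computes the closed-form estimator via the normal equations and checks it against numpy's factorization-based least-squares solver.

```python
import numpy as np

# Minimal OLS sketch on simulated data (illustrative only).
rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.0, 0.5, 3.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Closed form: beta_hat = (X^T X)^{-1} X^T y, solved without forming the inverse.
beta_normal_eq = np.linalg.solve(X.T @ X, X.T @ y)

# Numerically preferred route based on an orthogonal factorization of X.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

assert np.allclose(beta_normal_eq, beta_lstsq, atol=1e-8)
```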
2. Algorithmic and Structural Innovations
Recent advances provide alternative characterizations and algorithmic solutions that avoid matrix inversion and explicit normalization, expanding the toolkit for unregularized least squares estimation.
LU Factorization without Inversion
A constructive approach decomposes the Gram matrix $X^\top X$ via LU factorization:

$$X^\top X = L U.$$
If $U$ is the upper-triangular factor, individual coefficients can be calculated iteratively through back substitution:
- For $j = p$, $\hat{\beta}_p = c_p / u_{pp}$;
- For $j < p$, $\hat{\beta}_j = \left( c_j - \sum_{k=j+1}^{p} u_{jk}\,\hat{\beta}_k \right) / u_{jj}$,
where the $c_j$ are computed using inner products with the dependent variable $y$ (forward substitution of $X^\top y$ through the lower-triangular factor). This approach circumvents matrix inversion, increases numerical stability for ill-conditioned problems, and allows selective coefficient computation (Madar et al., 2023).
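A compact Python sketch of this inversion-free route, assuming SciPy's general-purpose LU routine in place of the paper's specific recursion; the explicit loop mirrors the back-substitution formulas above.

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

def ols_via_lu(X, y):
    """OLS coefficients via LU factorization of the Gram matrix (no inversion)."""
    G = X.T @ X                    # Gram matrix, p x p
    c = X.T @ y                    # inner products of predictors with the response
    P, L, U = lu(G)                # G = P L U
    z = solve_triangular(L, P.T @ c, lower=True)   # forward substitution
    p_dim = len(c)
    beta = np.zeros(p_dim)
    for j in range(p_dim - 1, -1, -1):             # back substitution, as in the text
        beta[j] = (z[j] - U[j, j + 1:] @ beta[j + 1:]) / U[j, j]
    return beta
```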
Simplified Gram–Schmidt Orthogonalization
A related, normalization-free Gram–Schmidt procedure (“SGSO”) yields orthogonal (but not normalized) vectors $z_1, \dots, z_p$:

$$z_1 = x_1, \qquad z_j = x_j - \sum_{k=1}^{j-1} \frac{\langle x_j, z_k \rangle}{\langle z_k, z_k \rangle}\, z_k, \quad j = 2, \dots, p.$$

This yields a unit upper-triangular matrix $T$ with $X = Z T$ (where $Z = [z_1, \dots, z_p]$), and the OLS coefficients can be recovered by back-solving:

$$T \hat{\beta} = \hat{\gamma}, \qquad \hat{\gamma}_j = \frac{\langle z_j, y \rangle}{\langle z_j, z_j \rangle}.$$
This iterative, projection-based procedure avoids explicit normalization and enables efficient algorithmic implementation (Madar et al., 2023).
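A self-contained Python sketch of a normalization-free Gram–Schmidt pass of this kind (structure and variable names are illustrative, not taken from Madar et al., 2023):

```python
import numpy as np

def ols_via_sgso(X, y):
    """OLS via a normalization-free Gram-Schmidt sweep (illustrative sketch)."""
    n, p = X.shape
    Z = np.zeros((n, p))              # orthogonal, non-normalized columns
    T = np.eye(p)                     # unit upper-triangular factor, X = Z T
    for j in range(p):
        z = X[:, j].astype(float)
        for k in range(j):
            T[k, j] = (X[:, j] @ Z[:, k]) / (Z[:, k] @ Z[:, k])
            z -= T[k, j] * Z[:, k]
        Z[:, j] = z
    # Project y onto each orthogonal direction, then back-solve through T.
    gamma = (Z.T @ y) / np.sum(Z * Z, axis=0)
    return np.linalg.solve(T, gamma)  # T is triangular, so this is back substitution
```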
3. High-Dimensional Generalizations without Penalization
In high-dimensional settings ($p > n$), classical OLS breaks down due to singularity of $X^\top X$. A generalized estimator, motivated by ridge regression, is constructed as follows:

$$\tilde{\beta} = X^\top (X X^\top)^{-1} y,$$

where inversion is performed on the $n \times n$ matrix $X X^\top$, which can be full-rank even for $p > n$.
This “in-projection” estimator captures the component of $\beta$ lying in the row space of $X$, multiplied back into predictor space. Notably, the entries of $\tilde{\beta}$ have proven effective for variable screening and selection:
- In sparse models, the magnitude $|\tilde{\beta}_j|$ separates strong from weak predictors with high probability, as formalized in thresholding inequalities.
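A minimal Python sketch of the screening step built on this estimator, assuming $X X^\top$ is invertible; the cap of $n$ retained variables is an illustrative default, not a prescription from the paper.

```python
import numpy as np

def generalized_ols_screen(X, y, n_keep=None):
    """Screen variables by the magnitude of beta_tilde = X^T (X X^T)^{-1} y."""
    n, p = X.shape
    beta_tilde = X.T @ np.linalg.solve(X @ X.T, y)   # solve an n x n system, no penalty
    n_keep = n if n_keep is None else n_keep          # illustrative default cap
    order = np.argsort(-np.abs(beta_tilde))           # rank by |beta_tilde_j|
    return beta_tilde, order[:n_keep]
```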
Two three-stage, non-iterative algorithms are built on this estimator [Editor’s term: “generalized unregularized OLS”]:
- LAT (Least-squares Adaptive Thresholding): (i) standardize the data, compute $\tilde{\beta}$, and retain the top-ranked variables; (ii) fit OLS in the selected submodel and hard-threshold small coefficients; (iii) refit and finalize the model (a schematic sketch follows this list).
- RAT (Ridge Adaptive Thresholding): similar, but uses ridge regression in stages 2–3 to improve conditioning (Wang et al., 2015).
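The following Python sketch outlines an LAT-style three-stage pipeline under stated assumptions: the screening size and the hard threshold (a universal-style rule using an estimated noise level) are illustrative placeholders, not the tuned choices of Wang et al. (2015).

```python
import numpy as np

def lat_style_pipeline(X, y, n_keep=None, tau=None):
    """Schematic three-stage screen -> threshold -> refit pipeline (illustrative)."""
    n, p = X.shape
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)        # stage 1: standardize
    ys = y - y.mean()
    beta_tilde = Xs.T @ np.linalg.solve(Xs @ Xs.T, ys)
    keep = np.argsort(-np.abs(beta_tilde))[: (n_keep or n // 2)]

    coef, *_ = np.linalg.lstsq(Xs[:, keep], ys, rcond=None)   # stage 2: OLS refit
    sigma_hat = np.std(ys - Xs[:, keep] @ coef)
    if tau is None:
        tau = sigma_hat * np.sqrt(2.0 * np.log(p) / n)        # illustrative threshold
    survivors = keep[np.abs(coef) > tau]

    beta = np.zeros(p)                                        # stage 3: final refit
    if survivors.size:
        final, *_ = np.linalg.lstsq(Xs[:, survivors], ys, rcond=None)
        beta[survivors] = final
    return beta, survivors
```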
Compared to $\ell_1$-penalization-based methods, these approaches:
- Do not require penalty tuning.
- Avoid shrinkage-induced bias.
- Rely on mild conditions for support recovery (finite noise variance, manageable condition number).
- Offer non-iterative, parallelizable algorithms.
4. Alternative Geometric Interpretations: Least Squares as Random Walks
A geometric and statistical reinterpretation frames unregularized LLS in terms of the net area annihilation of a “data walk.” For equispaced points $t_i = i$, $i = 1, \dots, n$, define mean-adjusted values $y_i - \bar{y}$ and their cumulative sum (data walk) $W_k = \sum_{i=1}^{k} (y_i - \bar{y})$, with $W_n = 0$.
The trend in the data is identified as the slope that zeroes the signed area under the walk of the detrended data:

$$\sum_{k=1}^{n} R_k(\hat{a}) = 0, \qquad R_k(a) = \sum_{i=1}^{k} \left[ (y_i - a t_i) - \overline{(y - a t)} \right].$$

For a linear trend $y_i = a t_i + b$, the residual walk decomposes as $R_k(a) = W_k - a\, T_k$, with $T_k = \sum_{i=1}^{k} (t_i - \bar{t})$ the walk of the mean-adjusted time index. Thus the area-balancing slope estimate is:

$$\hat{a} = \frac{\sum_{k=1}^{n} W_k}{\sum_{k=1}^{n} T_k}.$$
It is shown that this expression for $\hat{a}$ is algebraically identical to the conventional LLS slope for uniformly spaced $t_i$. This equivalence reveals least squares as the “detrending” operation that balances a cumulative walk, akin to setting the net area of a Brownian bridge to zero (Kostinski et al., 26 Mar 2025).
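A short Python check of this equivalence on simulated, equispaced data (the simulation setup is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
t = np.arange(1, n + 1, dtype=float)
y = 0.3 * t + 2.0 + rng.normal(scale=1.0, size=n)

W = np.cumsum(y - y.mean())          # data walk of the mean-adjusted responses
T = np.cumsum(t - t.mean())          # data walk of the mean-adjusted time index
slope_walk = W.sum() / T.sum()       # slope that zeroes the net area of the residual walk

slope_ols = np.polyfit(t, y, 1)[0]   # conventional LLS slope

assert np.isclose(slope_walk, slope_ols)
```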
This geometric perspective:
- Provides an intuitive, visual framework for understanding LLS in terms of random walks.
- Applies regardless of the noise distribution (Gaussian or otherwise), since it rests purely on summations.
- Admits reinterpretation of standard error and statistical significance in terms of random walk theory.
- Invites the use of stochastic process tools for further statistical analysis.
5. Computational and Theoretical Properties
The unregularized least squares estimators retain several computational and theoretical advantages:
- Non-iterative Computation: LU factorization and SGSO avoid inversion and enable efficient back or forward substitution.
- Parallelization: Matrix multiplications and screening steps in LAT/RAT are suitable for parallel computing architectures (Wang et al., 2015).
- Numerical Stability: Avoiding explicit inversion mitigates the effects of ill-conditioning in design matrices (Madar et al., 2023).
- Support Recovery and Error Bounds: In high-dimensional settings, under suitable conditions (mild assumptions on condition number and noise variance), the LAT and RAT algorithms achieve:
- Reliable strong/weak signal separation in screening.
- Rate-optimal recovery of true support.
- Error bounds, such as bounds on the $\ell_2$ estimation error, with explicitly characterized scaling in the problem dimensions.
Numerical experiments in diverse settings (independent predictors, compound symmetry, group structures, real data) confirm that generalized OLS-based three-stage methods yield competitive RMSE and improved computational times over penalization-based estimators.
6. Practical Applications and Model Selection
Unregularized least squares methodologies, both classical and high-dimensional, are applied extensively in regression analysis, signal recovery, and exploratory screening when explicit interpretability and unbiasedness are desired.
Key practical features include:
- Flexibility in high-dimensional settings via generalized estimators (Wang et al., 2015).
- The ability to compute coefficients directly, or selectively, using LU or SGSO approaches (Madar et al., 2023).
- Geometric and visualization-aiding interpretations for trend removal in time series and sequential data analysis (Kostinski et al., 26 Mar 2025).
- Model selection via ranking and thresholding, augmented by data-adaptive thresholds (e.g., using estimated noise variance and log-factors reflecting multiple testing corrections).
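As a simple illustration of such a data-adaptive rule, the sketch below combines an estimated noise level with a log-factor for multiplicity; this specific form is an assumption for illustration, not the exact rule from the cited papers.

```python
import numpy as np

def adaptive_threshold(sigma_hat, n, p):
    """Illustrative hard threshold: noise level times a multiplicity log-factor."""
    return sigma_hat * np.sqrt(2.0 * np.log(p) / n)

# Usage sketch: keep only coefficients whose magnitude clears the threshold.
# kept = beta_hat * (np.abs(beta_hat) > adaptive_threshold(sigma_hat, n, p))
```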
These advances enable efficient, interpretable model fitting in analytic, computational, and applied contexts without the need for regularization.
7. Comparison with Penalized and Regularized Approaches
Unregularized least squares differs fundamentally from penalized frameworks (such as lasso, SCAD, and ridge regression):
- No penalty is imposed; coefficient bias due to shrinkage is thus avoided.
- LAT/RAT algorithms sidestep the need for careful penalty parameter tuning.
- Fewer probabilistic assumptions are required (finite noise variance suffices; sub-Gaussianity and strong irrepresentability are not necessary).
- Theoretical support recovery and consistency are delivered under milder conditions.
- Computation is rooted in linear algebraic primitives suited for high-throughput environments.
A plausible implication is that in problems where explicit sparsity penalization may introduce an undesirable bias or where tuning is impractical, these unregularized frameworks offer efficiency, clarity, and strong support recovery, as reflected in their empirical and theoretical performance (Wang et al., 2015, Madar et al., 2023, Kostinski et al., 26 Mar 2025).