
Normalized Mean Square Error (NMSE)

Updated 2 January 2026
  • NMSE is a dimensionless metric that measures the fidelity of an estimator by normalizing the mean square error with the signal power or variance.
  • It is widely used in signal processing, statistical estimation, and communications, including applications like generalized LASSO, channel estimation, and metasurface design.
  • Analytical and empirical studies of NMSE guide system optimization, highlighting trade-offs in power settings, hardware impairments, and signal structure.

Normalized mean square error (NMSE) is a dimensionless performance metric that quantifies the relative error of an estimator, system, or algorithm by measuring the mean square deviation and normalizing it by an appropriate power or variance term. It is prevalent in signal processing, statistical estimation, compressed sensing, machine learning, and physical-layer design for communication systems.

1. Formal Definition and Variants

NMSE is most commonly defined as the ratio of the mean-squared error (MSE) between the true quantity and its estimate to a normalization term, typically the power or variance of the reference/target signal. Given a target $g$ and its estimate $\hat{g}$, the general form is

$$\mathrm{NMSE} = \frac{\mathbb{E}[\|g - \hat{g}\|^2]}{\mathbb{E}[\|g\|^2]}.$$

In dB, this is often reported as $10\log_{10}(\mathrm{NMSE})$.
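
As a concreteness check, the following minimal NumPy sketch computes the empirical NMSE, with sample averages standing in for the expectations above (function and variable names are illustrative):

```python
import numpy as np

def nmse(g_true, g_hat):
    """Empirical NMSE: squared-error energy normalized by signal energy."""
    return np.sum(np.abs(g_true - g_hat) ** 2) / np.sum(np.abs(g_true) ** 2)

def nmse_db(g_true, g_hat):
    """NMSE reported in dB, as is conventional."""
    return 10 * np.log10(nmse(g_true, g_hat))

# Example: a complex random signal observed with ~10% relative noise
rng = np.random.default_rng(0)
g = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
g_hat = g + 0.1 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
print(nmse_db(g, g_hat))  # close to -20 dB
```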

In system identification and communications, NMSE also appears as

$$\mathrm{NMSE}_\ell = \frac{\mathbb{E}[|y_\ell - y_{o,\ell}|^2]}{\mathbb{E}[|y_{o,\ell}|^2]},$$

where $y_\ell$ is the system's output and $y_{o,\ell}$ the ideal reference (Händel et al., 2019).

In statistics and learning theory, the normalized square error (NSE) is sometimes used, especially in LASSO and similar estimators, where normalization is performed w.r.t. the noise variance and potentially the number of measurements,

$$\mathrm{NMSE} = \frac{\mathbb{E}[\|x^* - x_0\|^2]}{m \sigma^2},$$

with $x_0$ the true signal, $x^*$ the estimate, $m$ the measurement dimension, and $\sigma^2$ the noise variance (1311.0830).

2. NMSE in Estimation and Detection Algorithms

NMSE is employed to evaluate the quality of estimators in high-dimensional statistics and communications.

  • Generalized LASSO: For $y = Ax_0 + z$, with $A\in\mathbb{R}^{m\times n}$ and Gaussian noise $z\sim\mathcal{N}(0,\sigma^2 I_m)$, the NMSE of an estimator $x^*$ is $\mathrm{NMSE} = \mathbb{E}[\|x^* - x_0\|^2]/(m\sigma^2)$ (1311.0830).
  • Channel Estimation: In MIMO and related systems, channel estimation algorithms such as LS, MMSE, and data-aided MMSE are benchmarked by their NMSE under different scenarios. For a true channel $g$ and estimator $\hat{g}$,

$$\mathrm{NMSE} = \frac{\mathbb{E}[\|g - \hat{g}\|^2]}{\mathbb{E}[\|g\|^2]}$$

is used directly for comparison (Liu et al., 2018).
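
To illustrate this comparison, the sketch below treats the scalar Rayleigh-fading case, where both a Monte Carlo estimate and the closed forms quoted later in Table 1 are available. It is a single-antenna, single-pilot toy model with illustrative parameters, not the exact setup of Liu et al. (2018):

```python
import numpy as np

rng = np.random.default_rng(0)
N, beta, rho = 200_000, 1.0, 10.0   # trials, large-scale fading, pilot SNR

# Rayleigh channel g ~ CN(0, beta); pilot observation y = g + n / sqrt(rho)
g = np.sqrt(beta / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
n = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
y = g + n / np.sqrt(rho)

g_ls = y                                # least-squares estimate
g_mmse = beta / (beta + 1 / rho) * y    # linear MMSE estimate

def nmse_db(g, g_hat):
    return 10 * np.log10(np.mean(np.abs(g - g_hat) ** 2) / np.mean(np.abs(g) ** 2))

print(nmse_db(g, g_ls), 10 * np.log10(1 / (rho * beta)))        # ~ -10 dB
print(nmse_db(g, g_mmse), 10 * np.log10(1 / (1 + rho * beta)))  # ~ -10.4 dB
```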

NMSE provides a fair metric for algorithms operating at different power regimes or using disparate reference signals, and enables meaningful performance benchmarking across scenarios.

3. Analytical Expressions and Model Dependencies

Analytical NMSE expressions often capture rich dependencies on noise, signal structure, system nonlinearity, and interference.

  • MIMO Transmitter with Crosstalk:

In the context of nonlinear MIMO transmitters with backward crosstalk (Händel et al., 2019), the per-branch NMSE for distorted output $y$ and ideal output $y_o$ is expressed as

$$\mathrm{NMSE}_\ell = \frac{e_{\ell\ell}}{\gamma_\ell^2\, \mathbb{E}[|x_\ell|^2]},$$

with $e_{\ell\ell}$ the diagonal of the error covariance $E = \mathbb{E}[(y - y_o)(y - y_o)^H]$. Explicit formulas for $e_{11}, e_{22}$ show the NMSE to be a convex, third-order polynomial in the input power, encapsulating the effects of PA nonlinearity, crosstalk, and noise.
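
A simulation sketch of this per-branch computation is shown below; a toy third-order nonlinearity and a fixed crosstalk level stand in for the detailed transmitter model of Händel et al. (2019), and all coefficients are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Hypothetical 2-branch transmitter: ideal outputs y_o, distorted outputs y.
x = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
gamma = np.array([1.0, 1.0])           # linear branch gains
y_o = gamma[:, None] * x               # ideal (linear) outputs
kappa3, xtalk = 0.05, 0.02             # toy nonlinearity / crosstalk levels
y = y_o - kappa3 * np.abs(x) ** 2 * x + xtalk * y_o[::-1]  # [::-1] swaps branches

# Error covariance E = E[(y - y_o)(y - y_o)^H]; NMSE_l = e_ll / (gamma_l^2 E|x_l|^2)
err = y - y_o
E = err @ err.conj().T / N
nmse_branch = np.real(np.diag(E)) / (gamma**2 * np.mean(np.abs(x) ** 2, axis=1))
print(10 * np.log10(nmse_branch))      # per-branch NMSE in dB
```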

  • Generalized LASSO NMSE:

For estimators defined via

$$\hat{x} \in \arg\min_x \{\|y - Ax\|_2 + \lambda f(x)\},$$

the asymptotic NMSE is

$$\mathrm{NMSE} \approx \frac{D(\lambda)}{m - D(\lambda)},$$

where $D(\lambda)$ encapsulates signal structure via the expected squared Gaussian distance to the scaled subdifferential of the regularizer (1311.0830).
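
For the $\ell_1$ regularizer at a $k$-sparse signal, $D(\lambda)$ admits a standard closed form (on-support coordinates contribute $1+\lambda^2$ each; off-support coordinates contribute the expected squared soft-threshold residual). The sketch below evaluates it together with the prediction $D/(m-D)$; parameter values are illustrative:

```python
from scipy.stats import norm

def d_lambda_l1(n, k, lam):
    """Expected squared distance of a standard Gaussian vector to the
    lambda-scaled subdifferential of the l1 norm at a k-sparse point."""
    # Off-support: E[(|g| - lam)_+^2] for g ~ N(0, 1), via Gaussian tail integrals
    off = 2 * ((1 + lam**2) * norm.sf(lam) - lam * norm.pdf(lam))
    return k * (1 + lam**2) + (n - k) * off

n, k, m, lam = 1000, 25, 500, 2.5   # illustrative problem sizes
D = d_lambda_l1(n, k, lam)
print(f"D(lambda) ~ {D:.1f}, predicted NMSE ~ {D / (m - D):.2f}")
```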

  • Data-aided MMSE Channel Estimation:

For DA-MMSE, the NMSE is characterized (in dB) as

$$\mathrm{NMSE}_k^{\mathrm{DA\text{-}MMSE}} = 10 \log_{10}\left( \frac{1}{1 + \rho_k^{DA} \beta_k^M} \right),$$

where $\rho_k^{DA}$ aggregates contributions from pilot SNR, data energy, and symbol BER, providing a direct link between estimator fidelity and system configuration (Liu et al., 2018).
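
Because the expression is closed form, evaluating it is direct; the sketch below treats $\rho_k^{DA}$ and $\beta_k^M$ as given scalars with illustrative values:

```python
import numpy as np

def nmse_da_mmse_db(rho_da, beta_kM):
    """DA-MMSE channel-estimation NMSE in dB (closed form from Liu et al., 2018)."""
    return 10 * np.log10(1.0 / (1.0 + rho_da * beta_kM))

# NMSE improves monotonically as the effective SNR-like term grows
for rho_da in [1.0, 10.0, 100.0]:
    print(rho_da, round(nmse_da_mmse_db(rho_da, beta_kM=1.0), 2))
```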

  • SIM-assisted Channel Estimation in Rician Fading:

For a multi-user system with stacked intelligent metasurfaces (SIM) (Papazafeiropoulos et al., 18 Feb 2025), the closed-form per-user NMSE is:

$$\mathrm{NMSE}_k = 1 - \frac{\kappa_k q_k \|\Phi^1 \bar{h}_k\|^2 + q_k^2\, \bar{h}_k^H \Phi^{1H} R_k \Phi^1 \bar{h}_k}{q_k\,\mathrm{tr}\{\Phi^1 R_k \Phi^{1H}\} + \frac{\sigma^2}{\tau\rho} N_t},$$

where the key parameters are the Rician $K$-factor $\kappa_k$, the large-scale fading term $q_k=\beta_k(1+\kappa_k)$, and the metasurface transformations $\Phi^1$.

4. NMSE Optimization and Theoretical Properties

Closed-form optimization of NMSE is tractable in several key scenarios.

  • Power Back-off for Distortion Minimization: For MIMO transmitters affected by nonlinearities and crosstalk, the minimal worst-case NMSE across branches is achieved by solving for the input power $x$ that minimizes $\max\{\mathrm{NMSE}_1(x), \mathrm{NMSE}_2(x)\}$; the solution follows from the roots of convex, third-order polynomials in $x$ (Händel et al., 2019). A small numerical sketch follows this list.
  • Phase-Shift Design in Metasurfaces: In SIM-assisted channel estimation, the phase shifts of each metasurface layer are optimized to minimize the average NMSE across users via projected gradient descent, exploiting the fact that the NMSE is differentiable with respect to these parameters (Papazafeiropoulos et al., 18 Feb 2025).
  • Sample Complexity and Geometry: In the generalized LASSO, NMSE is governed by geometric summary parameters (e.g., the statistical dimension $D(C)$) capturing the "effective size" of the regularizer's descent cone. Smaller $D(C)$ permits recovery from fewer measurements $m$ and incurs lower NMSE (1311.0830). The key theoretical property is concentration: the high-dimensional NMSE converges to its analytical prediction as $m, n \to \infty$.
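
The sketch below illustrates the min-max back-off numerically with two toy per-branch NMSE curves that mimic the noise-limited/distortion-limited trade-off; the coefficients are invented, and Händel et al. (2019) instead solve for polynomial roots in closed form:

```python
import numpy as np

# Toy per-branch NMSE curves in the input power x: a noise-limited term
# falling with power plus distortion terms rising with power.
def nmse_branch1(x):
    return 1e-3 / x + 2e-3 * x + 5e-4 * x**3

def nmse_branch2(x):
    return 2e-3 / x + 1e-3 * x + 8e-4 * x**3

# Minimize the worst case across branches over a power grid
x = np.linspace(0.05, 2.0, 4000)
worst = np.maximum(nmse_branch1(x), nmse_branch2(x))
x_opt = x[np.argmin(worst)]
print(f"optimal back-off power ~ {x_opt:.3f}, worst-case NMSE ~ {worst.min():.2e}")
```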

5. Empirical Trends and Comparisons

Empirical studies show consistent NMSE trends across application domains.

  • Effect of System Nonidealities: Even modest backward crosstalk in MIMO transmitters induces a non-trivial NMSE optimum shifted below the classical compression-only optimum, requiring practitioners to judiciously select transmit power for balanced NMSE and spectral efficiency (Händel et al., 2019).
  • Role of Training Resources and SNR: Increasing pilot or data power or length, or reducing BER, monotonically reduces NMSE in channel estimation. DA-MMSE outperforms both LS and pilot-only MMSE, particularly at low pilot power and high data reliability (Liu et al., 2018). For SIMs, additional metasurface layers yield diminishing NMSE improvement beyond $L \approx 6$, and LoS presence (a higher K-factor) consistently lowers NMSE (Papazafeiropoulos et al., 18 Feb 2025).
  • Estimator Comparison: Table 1 summarizes the NMSE expressions for typical estimators in communications:
| Estimator | NMSE Expression | Key Parameterization |
|---|---|---|
| LS (pilot only) | $10\log_{10}[1/(\rho^{Con}\beta_k^M)]$ | Pilot SNR $\rho^{Con}$ |
| MMSE (pilot only) | $10\log_{10}[1/(1+\rho^{Con}\beta_k^M)]$ | Pilot SNR $\rho^{Con}$ |
| DA-MMSE | $10\log_{10}[1/(1+\rho_k^{DA}\beta_k^M)]$ | SNR-like $\rho_k^{DA}$ incl. data/BER |
| SIM-MMSE | $1 - \frac{\text{LoS + NLoS terms}}{\text{total power + noise}}$ | SIM responses, Rician factor, SNR |

6. Key Applications and Advanced Use Cases

NMSE is a central metric in several advanced research areas.

  • Hardware-Impaired MIMO Links: Used to optimize power back-off and predistortion strategies where PA nonlinearity and isolation constraints interplay (Händel et al., 2019).
  • Sparse Signal Recovery: Asymptotic NMSE formulations guide the design and benchmarking of convex recovery algorithms under structured priors (1311.0830).
  • Data-Driven Channel Acquisition: NMSE expressions incorporating decoded data error rates set operational guidelines for scheduling, resource allocation, and reliability in heterogeneous networks (Liu et al., 2018).
  • Metasurface-Aided Massive MIMO: The NMSE metric drives the design and deployment of intelligent surface architectures, phase control algorithms, and per-layer resource allocation under composite fading models (Papazafeiropoulos et al., 18 Feb 2025).

These uses highlight NMSE's centrality as a link between analytical tractability, hardware design, and practical trade-off analysis in communication and statistical estimation.

7. Limitations and Interpretative Considerations

While NMSE is widely used due to its normalization and interpretability, certain caveats apply:

  • In high SNR or low-error regimes, NMSE differences across algorithms may become marginal and numerically sensitive.
  • In the presence of strongly structured signals, the interpretation of normalization must account for intrinsic signal power variability.
  • NMSE as a single-number summary may obscure performance variability over feature subspaces or system dimensions.
  • In channel and system design, NMSE minimization should be contextualized within broader requirements, e.g., spectral efficiency, robustness, and hardware overhead.

Nevertheless, NMSE remains a critical and theoretically grounded metric for algorithmic comparison and system optimization across high-dimensional signal processing and communications research.
