Mean Relative Accuracy (MRA)

Updated 23 August 2025
  • Mean Relative Accuracy is defined as the mean of the logarithmic ratio (ln Q) between predicted and actual values, providing a symmetric metric for forecast evaluation.
  • It offers significant bias reduction and scale invariance by addressing the limitations of measures like MAPE in heterogeneous and multiplicative error environments.
  • The use of MRA in regression leads to unbiased, geometrically-targeted predictions and serves as a useful diagnostic for systematic prediction errors.

Mean Relative Accuracy (MRA) is a term that, in predictive modeling and forecasting, denotes a class of metrics intended to evaluate the relative accuracy of predictions against actual values. In the research literature, this general concept is often instantiated as a specific, well-defined accuracy index, most notably the mean of the logarithmic accuracy ratio $\ln Q = \ln(\hat{y}/y)$, as distinct from more common measures like the mean absolute percentage error (MAPE). Note that “MRA” is sometimes ambiguous in the literature (it can also refer to multiresolution analysis in harmonic analysis and wavelet theory); in model evaluation, it serves to measure relative prediction fidelity with specific statistical and practical properties.

1. Definition and Mathematical Formulation

Mean Relative Accuracy in the context of model selection and prediction is formally expressed through the logarithmic ratio of prediction to observed values:

$$\ln Q = \ln(\hat{y}_i / y_i)$$

where $\hat{y}_i$ is the predicted value and $y_i$ is the observed (actual) value for instance $i$. The principal summary statistics derived from this are the mean log accuracy ratio (mean of $\ln Q$), the mean absolute log accuracy ratio (mean of $|\ln Q|$), and the mean squared log accuracy ratio (mean of $[\ln Q]^2$) (Tofallis, 2021).
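As a concrete illustration, these summary statistics take only a few lines of NumPy. The helper below is a minimal sketch (the name `lnq_statistics` is illustrative, not from the source), assuming all values are strictly positive, a requirement discussed in Section 6.

```python
import numpy as np

def lnq_statistics(y_pred, y_true):
    """Summary statistics of the log accuracy ratio ln Q = ln(y_pred / y_true)."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    ln_q = np.log(y_pred / y_true)            # requires strictly positive inputs
    return {
        "mean_lnq": ln_q.mean(),              # signed bias on the log scale
        "mean_abs_lnq": np.abs(ln_q).mean(),  # average relative spread
        "mean_sq_lnq": (ln_q ** 2).mean(),    # squared-error analogue
    }

# Over- and under-shooting by reciprocal ratios cancel in the mean:
print(lnq_statistics([125, 80], [100, 100]))  # mean_lnq ≈ 0
```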

Unlike measures such as MAPE,

$$\mathrm{MAPE} = \frac{1}{n} \sum_{i=1}^n \left|\frac{y_i - \hat{y}_i}{y_i}\right| \times 100\%,$$

which is inherently asymmetric, the mean of $\ln Q$ provides a symmetric, multiplicative, and scale-invariant comparison that is particularly relevant in heteroscedastic and multiplicative error settings.

2. Statistical Properties and Theoretical Advantages

The central feature of $\ln Q$-based Mean Relative Accuracy is symmetry:

$$\ln(\hat{y}/y) = -\ln(y/\hat{y}),$$

ensuring that over-prediction and under-prediction are penalized equally in log-space. This resolves the known under-prediction bias found in MAPE, where downward errors are bounded (as predictions cannot go below zero) but upward errors are virtually unbounded. The natural symmetry of $\ln Q$ means that, for example, an overestimation by 25% is directly counterbalanced by an underestimation by 20% (since $1.25 \times 0.8 = 1$).
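The contrast with MAPE is easy to verify numerically. The sketch below (illustrative code, not from the source) compares MAPE and $|\ln Q|$ on a prediction that doubles the actual value versus one that halves it:

```python
import numpy as np

def mape(y_pred, y_true):
    """Mean absolute percentage error, in percent."""
    y_pred, y_true = np.asarray(y_pred, float), np.asarray(y_true, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

y_true = np.array([100.0])
over   = np.array([200.0])  # over-prediction by a factor of 2
under  = np.array([50.0])   # under-prediction by a factor of 2

# MAPE penalizes the same multiplicative error unequally...
print(mape(over, y_true), mape(under, y_true))  # 100.0 vs 50.0
# ...while |ln Q| treats both directions identically.
print(abs(np.log(over / y_true)), abs(np.log(under / y_true)))  # both ln 2 ≈ 0.693
```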

In regression estimation, using least squares minimization over $\ln Q$:

$$L = \sum_i [\ln(\hat{y}_i) - \ln(y_i)]^2$$

leads (when optimizing over a constant) to predictions that exactly match the geometric mean of the observed values:

$$\hat{Y} = \exp\bigg(\frac{1}{n} \sum_i \ln y_i\bigg).$$

Thus, in settings with multiplicative noise or heteroscedasticity, the Mean Relative Accuracy naturally yields geometrically unbiased predictions, as opposed to standard approaches that target the arithmetic mean (Tofallis, 2021).
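This closed-form result can be checked directly with a one-dimensional optimizer. A minimal sketch, assuming NumPy and SciPy are available (data and seed are arbitrary):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
y = rng.lognormal(mean=1.0, sigma=0.5, size=1000)

# Minimize the sum of squared log ratios over a single constant prediction c.
loss = lambda c: np.sum((np.log(c) - np.log(y)) ** 2)
c_star = minimize_scalar(loss, bounds=(y.min(), y.max()), method="bounded").x

geometric_mean = np.exp(np.mean(np.log(y)))
print(c_star, geometric_mean)  # the two agree to optimizer tolerance
print(np.mean(y))              # the arithmetic mean is strictly larger (AM-GM)
```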

3. Practical Implications for Model Selection and Estimation

Employing Mean Relative Accuracy as the criterion for model comparison and parameter fitting imparts several practical benefits:

  • Bias Reduction: $\ln Q$-based metrics do not systematically prefer under- or over-predictive models, in contrast to MAPE, which is proven to systematically select models that under-predict.
  • Robustness to Scale: Because errors are measured as ratios rather than differences, models evaluated under $\ln Q$ are robust to changes in the scale of the dependent variable, a crucial property in domains with wide-ranging values or when multiplicative noise dominates (see the sketch after this list).
  • Interpretability and Diagnostics: A mean of $\ln Q$ equal to zero implies that, in aggregate, the predictions’ geometric mean matches that of the actuals. Significant deviations from zero can serve as diagnostics for systematic bias in the forecasting model.
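The scale-invariance point can be demonstrated in a few lines; the sketch below (illustrative values) verifies that rescaling the target variable, say from dollars to thousands of dollars, leaves every $\ln Q$ value unchanged:

```python
import numpy as np

y_true = np.array([10.0, 200.0, 3000.0])
y_pred = np.array([12.0, 180.0, 3300.0])

ln_q = np.log(y_pred / y_true)

# Rescaling both actuals and predictions cancels in the ratio,
# so the ln Q values (and their mean) are unchanged.
scale = 1e-3
ln_q_scaled = np.log((y_pred * scale) / (y_true * scale))
assert np.allclose(ln_q, ln_q_scaled)

print(ln_q.mean())  # near zero: no pronounced geometric bias in this toy example
```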

4. Empirical Performance Compared to Traditional Indices

Extensive Monte Carlo simulations demonstrate the superiority of the $\ln Q$ (mean log ratio) measure in several regimes (a simulation sketch follows the summary table below):

  • Heteroscedastic Data: When error variance increases with the value of the target variable, ln Q-based selection identifies the true model more frequently than MAPE.
  • Multiplicative Error Models: In settings where $y = f(x) \cdot \epsilon$ with log-normally distributed $\epsilon$, minimizing the mean squared log accuracy ratio consistently selects the correct model, even as noise increases, outperforming both MAPE and other alternatives such as symmetric MAPE (SMAPE) and logarithmic standard deviation (LSD).
  • Robustness to Outliers: Geometric mean-targeting objective functions (mean log ratios) are inherently less sensitive to outliers than arithmetic mean-targeting functions due to the properties of the logarithm and the arithmetic–geometric mean inequality (Tofallis, 2021).
| Metric | Bias with Multiplicative Noise | Model Selection Consistency | Outlier Robustness |
|---|---|---|---|
| MAPE | High (toward under-prediction) | Low at high noise | Poor |
| $\ln Q$ (MRA) | None (symmetric) | High | Good |
| SMAPE / LSD | Intermediate | Intermediate | Context-dependent |
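The self-contained simulation below is a sketch in the spirit of these experiments, not a reproduction of the paper's exact design: data are generated with multiplicative log-normal noise, and the true model is compared against a systematically under-predicting competitor under both criteria.

```python
import numpy as np

rng = np.random.default_rng(42)

def true_model(x): return 2.0 * x   # data-generating curve
def low_model(x):  return 1.6 * x   # systematically under-predicts

def mape(pred, y):    return np.mean(np.abs((y - pred) / y))
def msq_lnq(pred, y): return np.mean(np.log(pred / y) ** 2)

picks = {"mape": 0, "lnq": 0}
for _ in range(1000):
    x = rng.uniform(1.0, 10.0, size=50)
    y = true_model(x) * rng.lognormal(0.0, 0.5, size=50)  # multiplicative noise
    if mape(true_model(x), y) < mape(low_model(x), y):
        picks["mape"] += 1
    if msq_lnq(true_model(x), y) < msq_lnq(low_model(x), y):
        picks["lnq"] += 1

# Expected pattern: the squared-log-ratio criterion selects the true model in
# the vast majority of trials, while MAPE usually prefers the under-predictor.
print(picks)
```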

5. Connections to Error Modeling and Broader Statistical Context

The theoretical justification for Mean Relative Accuracy arises via error structure modeling:

  • In heteroscedastic regression, the classic model $y = f(x) \cdot \epsilon$, with $\epsilon$ log-normally distributed, naturally motivates least squares on logs:

$$\min \sum_i [\log y_i - \log \hat{y}_i]^2$$

  • MRA, in this sense, is a direct consequence of log-transformation under multiplicative error models, leading to estimators that are unbiased for the geometric mean of $y$ (a fitting sketch follows this list).
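A minimal fitting sketch under these assumptions: data are generated from a hypothetical power law with log-normal multiplicative noise (constants and seed are arbitrary), and ordinary least squares on the logs recovers the parameters while leaving log-residuals with mean near zero.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(1.0, 100.0, size=500)
y = 3.0 * x**1.5 * rng.lognormal(0.0, 0.3, size=500)  # y = f(x) * eps

# Least squares on logs: ln y = ln a + b ln x + ln eps (noise is now additive)
b, ln_a = np.polyfit(np.log(x), np.log(y), deg=1)
print(np.exp(ln_a), b)  # recovers a ≈ 3, b ≈ 1.5

# Log-space residuals have mean ~0: the fit is geometrically unbiased.
resid = np.log(y) - (ln_a + b * np.log(x))
print(resid.mean())
```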

A plausible implication is that in any domain where noise behavior is better described by multiplicative rather than additive error, Mean Relative Accuracy based on $\ln Q$ provides a theoretically correct and statistically robust measure for model fitting and selection. Such noise structures are common in financial, biological, and engineering applications.

6. Limitations and Diagnostic Use

While Mean Relative Accuracy overcomes the bias and asymmetry inherent in percentage error metrics, its use presupposes that all actuals and predictions are strictly positive, since $\ln Q$ is undefined for nonpositive $y_i$ or $\hat{y}_i$. In practice, this restricts its application to domains where negative or zero responses do not occur or can be safely excluded.
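In code, this restriction is usually enforced with an explicit guard. The helper below is an illustrative pattern (not a prescribed procedure): it drops pairs containing nonpositive entries before computing $\ln Q$, which is appropriate only when such cases can be safely excluded.

```python
import numpy as np

def safe_lnq(y_pred, y_true):
    """Return ln Q only for pairs where both values are strictly positive."""
    y_pred, y_true = np.asarray(y_pred, float), np.asarray(y_true, float)
    mask = (y_pred > 0) & (y_true > 0)  # drop pairs where ln Q is undefined
    return np.log(y_pred[mask] / y_true[mask])

print(safe_lnq([1.2, 0.0, 5.0], [1.0, 2.0, -4.0]))  # only the first pair survives
```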

When the mean of $\ln Q$ deviates substantially from zero, this flags systematic over- or under-prediction: a positive mean indicates over-prediction on the geometric scale, and a negative mean indicates under-prediction. This diagnostic property is absent from conventional absolute percentage metrics.

7. Summary

Mean Relative Accuracy, particularly through the mean of the logarithmic accuracy ratio, provides a statistically principled, symmetric, and robust alternative to widely used but biased metrics such as MAPE for measuring forecast accuracy and supporting model selection. By targeting the geometric mean of actuals, it aligns estimation procedures with the structure of multiplicative and heteroscedastic error models, yielding unbiased parameter fits and more reliable model rankings under such conditions. Simulation studies robustly support its superiority in environments with multiplicative noise and wide-ranging data, consolidating its place in modern predictive modeling and accuracy assessment (Tofallis, 2021).

References

Tofallis, C. (2021). A Better Measure of Relative Prediction Accuracy for Model Selection and Model Estimation. arXiv preprint.