Quantile Regression Forests (QRF)
- Quantile Regression Forests (QRF) are ensemble methods that nonparametrically estimate the entire conditional distribution instead of just the mean.
- QRF employs weighted aggregations from randomized trees and proximity measures to extract conditional quantiles for uncertainty quantification and probabilistic forecasting.
- The method is applied in diverse fields such as anomaly detection, survival analysis, and extreme value prediction while offering formal statistical guarantees and computational efficiency.
Quantile Regression Forests (QRF) are ensemble methods that extend random forests to construct nonparametric estimates of the entire conditional distribution of a response variable, not just the conditional mean. QRF provide a unified framework to estimate all conditional quantile levels with a single model for regression, probabilistic forecasting, uncertainty quantification, anomaly detection, and survival analysis, while inheriting robustness and scalability properties of classical regression forests (Li et al., 5 Aug 2024, Li et al., 2023, Papacharalampous et al., 2023, Taillardat et al., 2017, Zhou et al., 16 Oct 2024). The QRF methodology offers both empirical CDF outputs and high-resolution prediction intervals, underpinned by formal statistical theory, extensions for censored and time series data, targeted tuning for coverage, and proximity-based generalizations for computational efficiency.
1. Methodological Foundation
At the core of Quantile Regression Forests is the replacement of the point-estimate aggregation in regression forests by a weighted, nonparametric estimate of the conditional cumulative distribution function (CDF). For a training set $\{(X_i, Y_i)\}_{i=1}^n$ with $X_i \in \mathbb{R}^p$ and $Y_i \in \mathbb{R}$, and a new query point $x$, the QRF computes weights $w_i(x) = \frac{1}{T}\sum_{t=1}^{T} w_i^{(t)}(x)$ over the $T$ trees, where

$$w_i^{(t)}(x) = \frac{\mathbf{1}\{X_i \in \ell_t(x)\}}{|\ell_t(x)|},$$

with $\ell_t(x)$ denoting the set of training indices whose predictors fall in the same terminal node as $x$ in tree $t$ (Li et al., 5 Aug 2024, Li et al., 2023, Taillardat et al., 2017).
The conditional CDF estimator is

$$\hat{F}(y \mid x) = \sum_{i=1}^{n} w_i(x)\,\mathbf{1}\{Y_i \le y\},$$

and the conditional quantile at level $\tau \in (0, 1)$ is obtained by inversion:

$$\hat{Q}_\tau(x) = \inf\{y \in \mathbb{R} : \hat{F}(y \mid x) \ge \tau\}.$$

This approach enables nonparametric estimation of the full conditional distribution with a single ensemble model.
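To make the estimator concrete, here is a minimal sketch assuming scikit-learn: per-tree leaf weights are recovered from `RandomForestRegressor.apply`, and the weighted empirical CDF is inverted. The helper `qrf_quantiles` and all names are illustrative rather than an API from the cited papers, and for simplicity leaf membership is computed over the full training set rather than per-tree bootstrap samples.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def qrf_quantiles(forest, X_train, y_train, x_query, taus):
    """Conditional quantiles at levels `taus` for a single query point."""
    train_leaves = forest.apply(X_train)                    # (n, T) leaf ids
    query_leaves = forest.apply(x_query.reshape(1, -1))[0]  # (T,)
    n, T = train_leaves.shape
    weights = np.zeros(n)
    for t in range(T):
        in_leaf = train_leaves[:, t] == query_leaves[t]
        # Each tree spreads weight 1/T uniformly over its co-leaf training points.
        weights[in_leaf] += 1.0 / (T * in_leaf.sum())
    order = np.argsort(y_train)          # weighted empirical CDF of the response
    cdf = np.cumsum(weights[order])
    idx = [min(np.searchsorted(cdf, tau), n - 1) for tau in taus]
    return y_train[order][idx]

# Toy heteroscedastic data: the spread of y depends on x, so quantiles fan out.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(500, 3))
y = X[:, 0] + rng.normal(scale=0.5 + 0.5 * np.abs(X[:, 0]))
forest = RandomForestRegressor(n_estimators=200, min_samples_leaf=10,
                               random_state=0).fit(X, y)
print(qrf_quantiles(forest, X, y, np.zeros(3), taus=[0.1, 0.5, 0.9]))
```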
2. Algorithmic Structure and Variants
Tree Construction and Weighting
QRF grows randomized regression trees on bootstrap replicates of the training data. At each node, splitting proceeds by minimizing the post-split squared error, as in standard CART, or by more tailored criteria (e.g., gradient-quantile splits in gradient forests) (Taillardat et al., 2017). Each tree assigns weights inversely proportional to leaf size for observations in the same terminal node as the query point $x$ (Li et al., 5 Aug 2024).
Proximity-based Quantile Estimation
A recent generalization replaces the aggregation of per-tree leaf weights with a direct computation of sample-wise proximities,

$$\mathrm{prox}(x, X_i) = \frac{1}{T}\sum_{t=1}^{T} \mathbf{1}\{X_i \in \ell_t(x)\},$$

the fraction of trees in which $X_i$ shares a terminal node with $x$. These proximities serve as learned local similarity kernels that can be normalized and used analogously to standard QRF weights for CDF estimation and quantile extraction. The computational structure makes this approach advantageous for batch quantile queries (Li et al., 5 Aug 2024).
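Continuing the sketch above, the proximities can be computed in one vectorized pass over leaf indices; `forest_proximities` is an illustrative helper under the same scikit-learn assumptions, not the cited papers' implementation.

```python
def forest_proximities(forest, X_train, X_query):
    """prox[q, i] = fraction of trees in which query q and training point i
    fall in the same terminal node."""
    train_leaves = forest.apply(X_train)   # (n_train, T)
    query_leaves = forest.apply(X_query)   # (n_query, T)
    # Compare leaf ids for every (query, train) pair; average over the T trees.
    return (query_leaves[:, None, :] == train_leaves[None, :, :]).mean(axis=2)

prox = forest_proximities(forest, X, X[:5])        # (5, 500) similarities
weights = prox / prox.sum(axis=1, keepdims=True)   # normalized per query
# `weights` can stand in for the leaf-weight aggregation in the CDF estimator
# above; it is computed once per query batch and then reused for any number
# of quantile levels.
```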
Extensions for Censoring, Multivariate, and Time Series Data
Censored QRF variants replace the outcome $Y_i$ by observed pairs $(\tilde{Y}_i, \delta_i)$ (with $\delta_i$ indicating event versus censoring) and modify the estimation equation using a local Kaplan–Meier or Beran estimator for the conditional survival function (Li et al., 2020, Zhou et al., 16 Oct 2024).
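As a rough illustration of the censoring adjustment, the sketch below uses an unconditional Kaplan–Meier estimate of the censoring survival function to form inverse-probability-of-censoring (IPCW) weights; this is a simplification of the local/conditional estimators and estimating equations in the cited papers, assumes no tied times, and all names are illustrative.

```python
import numpy as np

def km_censoring_survival(times, events):
    """Kaplan-Meier estimate of the censoring survival function G, evaluated
    at each observation's own time (events = 1: event, 0: censored)."""
    order = np.argsort(times)
    d = events[order].astype(float)
    n = len(times)
    at_risk = n - np.arange(n)            # risk-set sizes in sorted order
    factors = 1.0 - (1.0 - d) / at_risk   # censorings act as G's "deaths"
    G = np.empty(n)
    G[order] = np.cumprod(factors)
    return G

def ipcw_weights(times, events):
    """Up-weight uncensored points by 1/G(T_i); censored points get weight 0."""
    G = km_censoring_survival(times, events)
    w = np.where(events == 1, 1.0 / np.clip(G, 1e-8, None), 0.0)
    return w / w.sum()

# In a censored QRF, weights of this kind multiply the forest weights w_i(x)
# before the weighted CDF is inverted for quantiles.
```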
In the multivariate setting, QRF-derived weights are combined with center-outward optimal transport to recover (conditional) vector quantile contours (Felix et al., 2023).
Generalized and time-series QRF (GRF, tsQRF) adapt tree-splitting to preserve "honesty" and other regularity conditions required for consistency under dependent data (Shiraishi et al., 2022).
3. Theoretical Guarantees and Statistical Properties
Quantile Regression Forests are consistent estimators of the conditional CDF under suitable regularity (trees grown deeply enough, honesty, randomization, Lipschitz regression function, etc.) (Li et al., 2023, Shiraishi et al., 2022, Nakamura et al., 28 Nov 2025). Uniform consistency is established for tsQRF under mixing (weak-dependence) conditions on the series and GRF splitting (Shiraishi et al., 2022).
In high-dimensional, large-sample settings, a bias–variance decomposition reveals a phase transition in statistical inference for variable importance via pinball-loss risks, controlled by the forest subsampling rate (Nakamura et al., 28 Nov 2025). For subsampling rates below a critical threshold, the QRF quantile estimator is asymptotically normal; above it, bias dominates and inference without analytic correction is invalid.
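For reference, the pinball (check) loss at level $\tau$ and its risk, which these importance measures compare with and without a given predictor, are the standard quantile-loss objects:

$$\rho_\tau(u) = u\bigl(\tau - \mathbf{1}\{u < 0\}\bigr), \qquad R_\tau(\hat{Q}) = \mathbb{E}\,\rho_\tau\bigl(Y - \hat{Q}_\tau(X)\bigr).$$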
Censored QRF and GCQRF provide consistency for quantile estimation under right-censoring, using weighted quantile loss minimization adjusted for censoring via survival probabilities (Li et al., 2020, Zhou et al., 16 Oct 2024). Weak convergence is described via incomplete infinite-degree U-processes for GCQRF (Zhou et al., 16 Oct 2024).
4. Tuning, Efficiency, and Computational Complexity
Default QRF parameter settings often yield competitive performance, but targeted tuning is important for prediction interval coverage and efficiency (Berkowitz et al., 2 Jul 2025). The quantile coverage loss (QCL) objective directly optimizes the (out-of-bag) difference between empirical quantile coverage and the nominal level, substantially reducing both bias and MSE of coverage in repeated experiments compared to MSE or c-index tuning (Berkowitz et al., 2 Jul 2025).
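A hedged sketch of coverage-targeted tuning in this spirit, using a held-out split instead of the out-of-bag scheme of the cited QCL criterion, and reusing the illustrative `qrf_quantiles` helper and toy data from Section 1:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def coverage_gap(m, X_tr, y_tr, X_val, y_val, tau_lo=0.05, tau_hi=0.95):
    """Absolute gap between empirical interval coverage and the nominal level."""
    f = RandomForestRegressor(n_estimators=200, min_samples_leaf=m,
                              random_state=0).fit(X_tr, y_tr)
    q = np.array([qrf_quantiles(f, X_tr, y_tr, x, [tau_lo, tau_hi])
                  for x in X_val])                          # (n_val, 2)
    covered = (y_val >= q[:, 0]) & (y_val <= q[:, 1])
    return abs(covered.mean() - (tau_hi - tau_lo))

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
gaps = {m: coverage_gap(m, X_tr, y_tr, X_val, y_val) for m in [1, 5, 10, 25, 50]}
best_m = min(gaps, key=gaps.get)
print(f"min_samples_leaf={best_m}, coverage gap={gaps[best_m]:.3f}")
```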
Computationally, standard QRF inference cost per test point grows with both the size of the forest and the number of quantile levels evaluated. Proximity-based QRF incurs an upfront proximity computation per query but enables efficient extraction of arbitrarily many quantile levels and reuse across queries (Li et al., 5 Aug 2024).
For extreme quantile levels, standard QRF cannot extrapolate beyond observed outcomes; extensions using local likelihood estimation of generalized Pareto parameters (ERF) address this limitation by fitting weighted EVT models using QRF weights (Gnecco et al., 2022, Taillardat et al., 2017).
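A sketch of the weighted tail-fit idea: exceedances over a high threshold are fitted with a generalized Pareto distribution by minimizing a weighted negative log-likelihood, where `weights` plays the role of the QRF weights $w_i(x)$ at the query point. The helper, threshold choice, and optimizer setup are assumptions for illustration, not the cited ERF implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genpareto

def weighted_gpd_fit(y, weights, threshold):
    """Fit GPD shape xi and scale sigma to exceedances over `threshold` by
    minimizing a weighted negative log-likelihood (scipy's `c` equals xi)."""
    keep = (y > threshold) & (weights > 0)
    z, w = y[keep] - threshold, weights[keep]

    def nll(params):
        xi, log_sigma = params
        ll = genpareto.logpdf(z, c=xi, scale=np.exp(log_sigma))
        return -np.sum(w * ll)   # -inf log-density outside support penalizes fit

    res = minimize(nll, x0=np.array([0.1, np.log(z.std() + 1e-8)]),
                   method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])   # (xi, sigma)

# With p_u the (weighted) exceedance probability above threshold u, tail
# quantiles extrapolate as
#   Q(tau) = u + (sigma / xi) * (((1 - tau) / p_u) ** (-xi) - 1),  tau > 1 - p_u.
```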
5. Applications
QRF have been adopted in multiple domains:
- Probabilistic forecasting of precipitation using satellite and sensor data, with rigorous assessment via quantile scoring rules and continuous ranked probability score (CRPS). QRF yields calibrated quantiles, competitive with and sometimes superior to ensemble model output statistics (EMOS) and boosting methods in spatial prediction, but lags LightGBM in speed and marginally in aggregate score (Papacharalampous et al., 2023, Taillardat et al., 2017).
- Anomaly detection via context-specific QRF for explainable contextual anomaly scoring, leveraging locally estimated conditional quantiles and interval widths (Li et al., 2023).
- Survival analysis with right-censored event times and high-dimensional predictors, using GCQRF and censored QRF for robust, nonparametric estimation of quantile processes and variable importance (Zhou et al., 16 Oct 2024, Li et al., 2020).
- Extreme value prediction, combining QRF proximity weights with local GPD likelihood for tail risk estimation (Gnecco et al., 2022).
- Multivariate quantiles via optimal transport mapped from QRF-derived empirical measures (Felix et al., 2023).
- Time series quantile regression in stationary and nonlinear autoregressive models, with tsQRF providing robust, volatility-sensitive estimates (Shiraishi et al., 2022).
- Financial prediction intervals and risk estimation, including daily bond volume and asset volatility prediction, where proximity-based QRF yields tighter intervals and more efficient coverage (Li et al., 5 Aug 2024).
6. Limitations and Practical Considerations
QRFs are fundamentally restricted to the range of observed training responses and do not extrapolate for extreme quantiles unless combined with EVT or parametric tail models (Taillardat et al., 2017, Gnecco et al., 2022). Complexity–interpretability trade-offs arise: QRFs provide rich, flexible distributional estimates, but the ensemble structure is "black-box" and variable-importance inference becomes subtle in high-dimensional or highly correlated data (Nakamura et al., 28 Nov 2025, Papacharalampous et al., 2023). Variable importance estimation requires attention to bias regimes and potential analytic correction (Nakamura et al., 28 Nov 2025).
Hyperparameter tuning aligned with quantile estimation objectives (coverage, loss) is essential for predictive validity and interval width minimization (Berkowitz et al., 2 Jul 2025). For high-resolution or repeated quantile extraction, proximity-based QRF or batch strategies are preferable for computational efficiency (Li et al., 5 Aug 2024).
7. Table: Core QRF Formulas and Use-cases
| Context | Key Formula | Papers |
|---|---|---|
| Conditional quantile | $\hat{Q}_\tau(x) = \inf\{y : \hat{F}(y \mid x) \ge \tau\}$ with $\hat{F}(y \mid x) = \sum_i w_i(x)\,\mathbf{1}\{Y_i \le y\}$ | (Li et al., 5 Aug 2024) |
| Censored QRF | Weighted quantile loss adjusted for censoring via a local Kaplan–Meier/Beran survival estimator | (Li et al., 2020) |
| Proximity-based QRF | $\mathrm{prox}(x, X_i) = \frac{1}{T}\sum_t \mathbf{1}\{X_i \in \ell_t(x)\}$ as similarity weights for all quantile queries | (Li et al., 5 Aug 2024) |
| Variable importance | Pinball-loss risk difference, with validity governed by the subsampling-rate phase transition | (Nakamura et al., 28 Nov 2025) |
| EVT extension (ERF) | Weighted local GPD likelihood and tail quantile extrapolation | (Gnecco et al., 2022) |
| Targeted tuning (QCL) | Out-of-bag difference between empirical quantile coverage and the nominal level | (Berkowitz et al., 2 Jul 2025) |
References
- "Quantile Regression using Random Forest Proximities" (Li et al., 5 Aug 2024)
- "Asymptotic Theory and Phase Transitions for Variable Importance in Quantile Regression Forests" (Nakamura et al., 28 Nov 2025)
- "Explainable Contextual Anomaly Detection using Quantile Regression Forests" (Li et al., 2023)
- "Uncertainty estimation of machine learning spatial precipitation predictions from satellite data" (Papacharalampous et al., 2023)
- "Global Censored Quantile Random Forest" (Zhou et al., 16 Oct 2024)
- "Extremal Random Forests" (Gnecco et al., 2022)
- "Censored Quantile Regression Forest" (Li et al., 2020)
- "Targeted tuning of random forests for quantile estimation and prediction intervals" (Berkowitz et al., 2 Jul 2025)
- "Forest-based methods and ensemble model output statistics for rainfall ensemble forecasting" (Taillardat et al., 2017)
- "Some novel aspects of quantile regression: local stationarity, random forests and optimal transportation" (Felix et al., 2023)
- "Time series quantile regression using random forests" (Shiraishi et al., 2022)