Quantile Regression Neural Network

Updated 26 October 2025
  • Quantile Regression Neural Network is a deep learning model that estimates conditional quantiles to capture comprehensive risk profiles and distributional uncertainty.
  • It employs a multi-branch architecture with convolutional and LSTM layers to efficiently model high-frequency, structured financial data.
  • The framework overcomes quantile crossing by using post hoc rearrangement, enabling robust prediction intervals and improved risk management.

A Quantile Regression Neural Network (QRNN) is a deep learning model specifically designed to estimate conditional quantiles of a target variable, rather than just its conditional mean. This modeling paradigm provides a more complete characterization of the target distribution, capturing asymmetric risks and distributional uncertainty. QRNNs are particularly advantageous in domains where prediction intervals and robust risk assessment are necessary, such as financial forecasting, survival analysis, and other areas with heavy-tailed, non-Gaussian, or heteroscedastic data.

1. Overview and Formalism

A QRNN extends classical quantile regression by leveraging deep architectures to model the conditional quantile function $\hat{r}^{(\tau)}(t)$ for each quantile level $\tau \in (0,1)$. For each observation $r(t)$ at time $t$ (or more generally, target variable $y$ for input $x$), the network is trained using the quantile loss:

$$L_{(\tau)}(r, \hat{r}^{(\tau)}) = \sum_{t:\, r(t) < \hat{r}^{(\tau)}(t)} (1 - \tau)\, |r(t) - \hat{r}^{(\tau)}(t)| + \sum_{t:\, r(t) > \hat{r}^{(\tau)}(t)} \tau\, |r(t) - \hat{r}^{(\tau)}(t)|$$

This so-called "pinball" (or check) loss penalizes over- and underestimates asymmetrically according to $\tau$, ensuring the estimated function approximates the true conditional quantile: $P(r(t) \leq \hat{r}^{(\tau)}(t)) \approx \tau$.

For $\tau = 0.5$, the loss is proportional to the mean absolute error (MAE, up to a factor of one half), so the median quantile corresponds to MAE-style median regression.
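A minimal NumPy sketch of the pinball loss (the sample values and the grid search below are purely illustrative). Minimizing the loss over a constant prediction recovers the empirical $\tau$-quantile, which is the property the QRNN exploits:

```python
import numpy as np

def pinball_loss(r, r_hat, tau):
    """Pinball (check) loss: under-predictions are weighted by tau,
    over-predictions by (1 - tau)."""
    diff = r - r_hat
    return np.mean(np.where(diff > 0, tau * diff, (tau - 1) * diff))

# Minimizing over a constant prediction recovers the empirical
# tau-quantile of the sample (here, the median for tau = 0.5).
r = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
grid = np.linspace(0, 6, 601)
losses = [pinball_loss(r, c, 0.5) for c in grid]
best = grid[int(np.argmin(losses))]
# best is (approximately) the sample median 3.0
```

For $\tau = 0.5$ this is exactly half the absolute error, which is why the median case behaves like MAE regression.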

2. Architecture and Multi-Quantile Design

The canonical QRNN architecture is modular and multitask, with a design optimized for high-frequency, structured data such as Limit Order Books (LOBs) in finance (Zhang et al., 2019). The main architectural elements include:

  • Shared feature extraction: Raw input data (such as prices and volumes at multiple levels in the LOB) are processed by a convolutional block to extract spatially localized patterns describing market microstructure.
  • Temporal modeling: Extracted features are further processed by one or more Long Short-Term Memory (LSTM) layers to encode temporal dependencies, essential for time-series data with high autocorrelation and volatility clustering.
  • Branched outputs: To efficiently estimate multiple quantiles (e.g., 0.25, 0.5, 0.75) for multiple targets (e.g., long and short future returns), shared lower layers are followed by several LSTM branches. Each branch is responsible for estimating a specific quantile and target pair, and each is associated with its own quantile loss. For $Q$ targets and $K$ quantiles, the model outputs $Q \times K$ predictions with a total loss equal to the sum of the individual quantile losses.

This multi-output, parameter-sharing strategy avoids the inefficiency of training separate models for each quantile.
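A shape-level sketch of this shared-trunk, multi-head design. The dimensions, initialization, and the use of simple linear heads are illustrative assumptions (the paper's branches are LSTMs over convolutional features); the point is the $Q \times K$ output layout over a shared representation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative, not the DeepLOB-QR sizes):
# D input features, H shared hidden units, Q targets (long/short),
# K quantile levels per target.
D, H, Q, K = 40, 16, 2, 3
taus = [0.25, 0.5, 0.75]

# Shared trunk, then one small linear head per (target, quantile) pair.
W_shared = rng.normal(size=(D, H)) * 0.1
heads = rng.normal(size=(Q * K, H)) * 0.1

def forward(x):
    """Map one feature vector to Q*K quantile predictions."""
    h = np.tanh(x @ W_shared)   # shared representation
    return heads @ h            # shape (Q*K,)

preds = forward(rng.normal(size=D)).reshape(Q, K)
# preds[q, k] estimates the taus[k] quantile of target q.
```

Training would sum a pinball loss (at the matching $\tau$) over all $Q \times K$ outputs, so the trunk parameters are shared across every quantile and target.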

3. Quantile Crossing and Rearrangement

A notable statistical challenge in QRNNs is quantile crossing, where the estimated $\tau_2$-quantile $\hat{r}^{(\tau_2)}$ may be lower than the $\tau_1$-quantile for $\tau_2 > \tau_1$, violating monotonicity.

To enforce non-crossing, a rearrangement method (following Chernozhukov et al., 2010) is applied post hoc. This method reorders the predicted quantile values so that the final set of quantiles is non-decreasing across $\tau$. Formally, for predicted quantiles $\{\hat{r}^{(\tau_1)}(t), \hat{r}^{(\tau_2)}(t), \ldots\}$ at each time $t$, the values are sorted to enforce the required monotonicity:

$$\hat{r}^{(\tau_1)}(t) \leq \hat{r}^{(\tau_2)}(t) \leq \dots \leq \hat{r}^{(\tau_K)}(t), \quad \forall\, \tau_1 < \tau_2 < \dots < \tau_K$$
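Because rearrangement amounts to sorting the predictions along the quantile axis, it is a one-liner in practice (the crossed values below are made up for illustration):

```python
import numpy as np

def rearrange_quantiles(q_preds):
    """Post hoc rearrangement in the spirit of Chernozhukov et al.
    (2010): sort predicted quantiles along the tau axis so the final
    sequence is non-decreasing, eliminating quantile crossing."""
    return np.sort(q_preds, axis=-1)

# A crossed prediction: the 0.5-quantile sits below the 0.25-quantile.
crossed = np.array([[0.10, 0.05, 0.30]])  # taus 0.25, 0.5, 0.75
fixed = rearrange_quantiles(crossed)
# fixed is [[0.05, 0.10, 0.30]]: monotone in tau
```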

4. Forecast Combination and Uncertainty Quantification

Combining quantile estimates enables the construction of robust point forecasts and uncertainty intervals. Instead of relying solely on the median ($\tau = 0.5$), the model may output a weighted combination:

$$\hat{r}(t) = \sum_{\tau \in \mathcal{S}} \pi^{(\tau)} \hat{r}^{(\tau)}(t)$$

where $\mathcal{S}$ is the set of quantile levels and $\pi^{(\tau)}$ are weights (fixed or optimized through constrained regression). By considering the interval $[\hat{r}^{(\tau_l)}(t), \hat{r}^{(\tau_u)}(t)]$ for lower and upper quantiles, the model provides calibrated measures of uncertainty suitable for risk-sensitive applications.

This strategy leads to prediction intervals that are adaptive to changes in distribution shape, accounting for heavy tails and time-varying volatility. It also mitigates the impact of outliers and distributional shifts that can disproportionately affect point forecasts.
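A small sketch of both ideas, using hypothetical predicted quantiles and equal combination weights (in practice the weights could be fit by constrained regression, as noted above):

```python
import numpy as np

# Hypothetical predicted quantiles at taus 0.1, 0.25, 0.5, 0.75, 0.9
taus = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
q_hat = np.array([-1.8, -0.6, 0.1, 0.7, 2.1])  # illustrative values

# Point forecast as a weighted combination of quantile estimates
# (equal weights here, purely as a placeholder choice).
weights = np.full(len(taus), 1.0 / len(taus))
point = float(weights @ q_hat)

# An 80% prediction interval from the 0.1 and 0.9 quantiles.
lower, upper = float(q_hat[0]), float(q_hat[-1])
```

The interval width ($2.1 - (-1.8) = 3.9$ here) adapts automatically when the predicted distribution widens or skews, which is the source of the robustness claimed above.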

5. Empirical Performance and Evaluation

The QRNN framework has been evaluated on full-resolution LOB data from high-liquidity stocks (e.g., Lloyds Bank, Barclays, Tesco, BT, Vodafone) in the London Stock Exchange (Zhang et al., 2019). Performance is measured with:

  • Mean Absolute Error (MAE)
  • Mean Squared Error (MSE)
  • Median Absolute Error (MeAE)
  • $R^2$ Score

Relative to both naive (zero-intelligence) baselines and standard machine-learning regressors (autoregressive models, GLR, SVR, MLPs), the QRNN approach consistently demonstrates improved accuracy and robustness across evaluation metrics.
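The four metrics are standard and can be computed directly (this helper and its toy data are illustrative; libraries such as scikit-learn provide equivalent functions):

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Compute the four evaluation metrics listed above."""
    err = y_true - y_pred
    mae = np.mean(np.abs(err))                    # Mean Absolute Error
    mse = np.mean(err ** 2)                       # Mean Squared Error
    meae = np.median(np.abs(err))                 # Median Absolute Error
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                    # R^2 score
    return {"MAE": mae, "MSE": mse, "MeAE": meae, "R2": r2}

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
m = evaluate(y_true, y_pred)
```

MeAE is included alongside MAE because the median of the absolute errors is far less sensitive to the outliers common in high-frequency return data.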

6. Simultaneous Long/Short Return Modeling

A distinguishing feature of the QRNN approach in this context is its simultaneous modeling of conditional quantiles for both buy (long) and sell (short) positions. Separate branches in the architecture model the asymmetric return distributions for each, while the lower shared layers act as a feature backbone. This enables the capture of distinct distributional properties between long and short returns due to spread effects and market microstructure.

Auxiliary covariates (e.g., past returns) can be routed to appropriate branches to further refine estimation.

7. Implementation Details and Return Formulas

Key return transformation formulas as implemented in the model include:

$$r_i(t) = \frac{\Delta p_i(t)}{p_{\text{mid}}(t)}, \quad \Delta p_i(t) = z_i(t)\,\Delta m(t) - \frac{s(t) + s(t+k)}{2}$$

where $z_i(t) = \pm 1$ indicates position direction (long/short), $s(t)$ is the spread, $\Delta m(t)$ is the midprice change, and $p_{\text{mid}}(t)$ is the contemporaneous midprice.

Effective returns adjusting for spread are:

$$r_i'(t) = \frac{\Delta p_i'(t)}{p_{\text{mid}}(t)}, \quad \Delta p_i'(t) = \Delta p_i(t) + s(t)$$

with a further decomposition:

$$r_i'(t) = z_i(t)\, r_{\text{mid}}(t) - \frac{r_{\text{spread}}(t)}{2}, \quad r_{\text{mid}}(t) = \frac{\Delta m(t)}{p_{\text{mid}}(t)}, \quad r_{\text{spread}}(t) = \frac{s(t+k) - s(t)}{p_{\text{mid}}(t)}$$

These return definitions directly inform the targets for the quantile regression branches.
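The return formulas can be verified numerically with a small helper (function and variable names here are illustrative, not from the paper's code):

```python
def position_return(z, dm, s_t, s_tk, p_mid):
    """r_i(t) for direction z (+1 long, -1 short), midprice change dm
    over horizon k, spreads s(t) and s(t+k), and midprice p_mid."""
    dp = z * dm - (s_t + s_tk) / 2.0
    return dp / p_mid

def effective_return(z, dm, s_t, s_tk, p_mid):
    """Spread-adjusted return: r_i'(t) = (dp + s(t)) / p_mid."""
    return position_return(z, dm, s_t, s_tk, p_mid) + s_t / p_mid

# Numerical check of the decomposition r_i' = z * r_mid - r_spread / 2
z, dm, s_t, s_tk, p_mid = +1, 0.05, 0.02, 0.03, 100.0
r_mid = dm / p_mid
r_spread = (s_tk - s_t) / p_mid
assert abs(effective_return(z, dm, s_t, s_tk, p_mid)
           - (z * r_mid - r_spread / 2)) < 1e-12
```

The decomposition makes the economics explicit: the directional midprice move $z_i(t)\,r_{\text{mid}}(t)$ is dampened by half of any spread widening over the holding horizon.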

8. Significance and Use in Financial Modeling

By constructing a robust, multi-quantile predictive model, QRNNs enable practitioners to forecast not only the most likely outcome but also the full range of plausible future outcomes. This is especially critical in high-frequency finance, where the distribution of returns is non-Gaussian with pronounced skewness and kurtosis. QRNNs facilitate risk management strategies by quantifying tail risks, supporting value-at-risk estimation, and providing actionable predictive intervals for trading algorithms.

The approach as demonstrated in DeepLOB-QR (Zhang et al., 2019) combines state-of-the-art deep learning architectures (convolutional-LSTM feature extractors) with robust statistical estimation (multitask quantile loss), achieving empirically superior results and providing the uncertainty quantification that is crucial for robust decision making in dynamic environments.
