
Adaptive-Horizon Ensemble Techniques

Updated 13 November 2025
  • Adaptive-horizon ensembles are algorithmic paradigms that dynamically select ensemble members and planning horizons based on instance-specific data for tailored forecasting and planning.
  • They leverage meta-learning frameworks to rank models and determine optimal ensemble sizes, aiming to minimize forecast errors like SMAPE and MAE.
  • In sequential decision-making, adaptive receding-horizon planning adjusts look-ahead spans and swarm sizes in MDPs, ensuring convergence and improved control performance.

An adaptive-horizon ensemble is an algorithmic paradigm in forecasting and planning where the configuration of an ensemble—its members and cardinality—is dynamically selected in response to instance-specific data and the desired prediction or plan horizon. It represents an advancement over static ensemble methods by incorporating meta-learning or optimization layers that tune both model diversity and effective planning length, leading to improved predictive accuracy and convergence properties. This article surveys the concept as implemented in time-series forecasting meta-learners (Vaiciukynas et al., 2020) and optimal plan synthesis over Markov Decision Processes (Lukina et al., 2016), detailing the technical frameworks, meta-model architectures, adaptive control strategies, evaluation results, and formal convergence guarantees.

1. Definition of Adaptive-Horizon Ensembles

An adaptive-horizon ensemble, as codified in “Two-Step Meta-Learning for Time-Series Forecasting Ensemble” (Vaiciukynas et al., 2020), is an ensemble whose composition and size are conditional on (a) specific time-series characteristics and (b) the forecasting horizon. The ensemble is not static; both the identities of the included models and how many are pooled are selected for each series-horizon instance by meta-models trained to maximize forecast performance. In “ARES: Adaptive Receding-Horizon Synthesis of Optimal Plans” (Lukina et al., 2016), the term reflects a variable planning horizon in sequential decision-making: the look-ahead horizon and the number of planning particles are adaptively expanded or contracted at run-time in response to the current search progress.

A plausible implication is that adaptive-horizon ensembles generalize fixed-horizon or static ensembles, offering greater capacity for instance-aware modeling and horizon-specific optimization.

2. Meta-Learning for Forecasting Ensembles

The adaptive-horizon ensemble in time-series forecasting leverages meta-learning for two key decisions: model ranking and ensemble size selection.

  • The framework comprises two Random Forest regression models:
    • A1, the "ranker": accepts 390 meta-features, horizon, and model ID; outputs a continuous ranking score for each of 22 base forecasting methods.
    • A2, the "capper": accepts only meta-features and horizon; outputs the optimal number of models ($K$) to include.

The full meta-feature set consists of 130 descriptors, each computed on three transforms of the series (original, first difference, log), yielding 390 features spanning statistics from catch22, tsfeatures, stlfeats, hctsa summaries, heterogeneity, portmanteau, stationarity, normality, kurtosis, skewness, the Hurst exponent family, fractality, entropy, and anomaly classes. Meta-models are trained by splitting real-world series into train-test portions for various horizons; error metrics are computed (RMSE, MAE, MDAE, SMAPE, MAAPE, MASE), and model rankings averaged over these errors define the targets for A1. A2 is supervised by the ensemble size that minimizes forecast error when pooling the top-ranked forecasters.
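A minimal sketch of the 3x feature expansion, with the 130-descriptor set reduced to two toy statistics (the `mean` and `spread` functions below are illustrative stand-ins, not the actual catch22/tsfeatures descriptors):

```python
import math

def meta_features(series, descriptors):
    """Evaluate each descriptor on the original series, its first
    difference, and its log transform (assumes positive values)."""
    diff = [b - a for a, b in zip(series, series[1:])]
    logs = [math.log(x) for x in series]
    return [d(t) for t in (series, diff, logs) for d in descriptors]

# Toy stand-ins for the real descriptor set
mean = lambda xs: sum(xs) / len(xs)
spread = lambda xs: max(xs) - min(xs)

feats = meta_features([1.0, 2.0, 4.0, 8.0], [mean, spread])
print(len(feats))  # 2 descriptors x 3 transforms = 6 features
```

With the full 130-descriptor set, the same expansion yields the 390-dimensional meta-feature vector fed to A1 and A2.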

The ensemble adaptation mechanism at test time is as follows:

  1. Compute 390 meta-features for the candidate series.
  2. Input horizon and data type.
  3. Query A1 for predicted ranking scores $\hat r_i$, $i = 1, \ldots, 22$.
  4. Query A2 for the suggested ensemble size $\hat K$.
  5. Select the top $\hat K$ base models by ascending $\hat r_i$.
  6. Re-fit each selected model, produce horizon-specific forecasts.
  7. Pool forecasts by simple average or reciprocal-rank weighted average.

This architecture enables horizon-by-horizon tuning of ensemble composition—not just model selection but also ensemble size—providing significant gains over benchmarks.
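The selection steps of this procedure can be sketched as follows; `predict_rank` and `predict_size` are hypothetical stand-ins for the trained A1 and A2 meta-models, and the model list is a toy subset of the 22 base forecasters:

```python
def adapt_ensemble(meta_features, horizon, predict_rank, predict_size, base_models):
    """Query the ranker (A1) and capper (A2), then keep the top-K models."""
    # Steps 1-4: per-model ranking scores and suggested ensemble size
    scores = {m: predict_rank(meta_features, horizon, m) for m in base_models}
    k = predict_size(meta_features, horizon)
    # Step 5: a lower predicted rank score is better, so sort ascending
    return sorted(base_models, key=scores.get)[:k]

# Toy stand-ins for the meta-models (invented scores and size)
models = ["ets", "arima", "theta", "naive"]
rank = lambda f, h, m: {"ets": 1.2, "arima": 2.5, "theta": 1.8, "naive": 3.9}[m]
size = lambda f, h: 2

print(adapt_ensemble(None, 6, rank, size, models))  # ['ets', 'theta']
```

The selected models would then be re-fit and their horizon-specific forecasts pooled (steps 6 and 7).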

3. Adaptive Receding-Horizon Planning in MDPs

In ARES (Lukina et al., 2016), adaptive-horizon control applies particle-swarm optimization (PSO) within a receding-horizon planning framework for deterministic discrete-time MDPs. The approach decomposes optimal-plan synthesis into multiple levels with guaranteed improvement in a Lyapunov-style cost function.

Key features include:

  • The planning horizon ($h$) and swarm size ($p$) are dynamically adjusted at each level. If PSO cannot reach the next-level cost target, the horizon is increased up to a maximum; if further progress still fails, the swarm size is incremented.
  • Importance Splitting resampling replicates successful states among an ensemble of MDP clones, discarding and replacing unsuccessful trajectories.
  • Each level enforces a cost gap $\Delta_i = J(s_{i-1}) / (m - (i-1))$ to guarantee monotonic convergence towards a threshold $\varphi$.
  • When applied to the V-formation problem, ARES generated successful plans with a 95% rate for flocks of 7 birds, averaging 63 seconds per instance across 8,000 initial configurations.

This suggests that adaptive-horizon mechanisms are effective for avoiding local minima and offer formal convergence guarantees in optimal control settings.
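The horizon-then-swarm escalation policy described above can be sketched as below; the function name and toy cost model are invented, and PSO itself is abstracted into a single call:

```python
def plan_level(try_pso, target, h_max, p_max):
    """Expand the horizon first, then the swarm, until the level target is met."""
    h, p = 1, 1
    while True:
        cost = try_pso(h, p)
        if cost <= target:
            return h, p, cost
        if h < h_max:
            h += 1          # first remedy: look further ahead
        elif p < p_max:
            p += 1          # second remedy: add particles to the swarm
        else:
            raise RuntimeError("level target unreachable within resource bounds")

# Toy cost model: improves with both horizon and swarm size
toy_cost = lambda h, p: 10.0 / (h + p)
print(plan_level(toy_cost, target=2.0, h_max=3, p_max=5))  # (3, 2, 2.0)
```

In ARES the `try_pso` call would run a PSO pass over the cloned MDP ensemble, with Importance Splitting resampling applied between levels.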

4. Key Algorithms and Formulas

Forecast Ensemble Meta-Learning (Vaiciukynas et al., 2020):

  • Ranking score for method $i$:

$$r_i = \frac{1}{|E|}\sum_{e\in E}\operatorname{rank}_e(i)$$

where $E = \{\text{RMSE},\ \text{MAE},\ \text{MDAE},\ \text{SMAPE},\ \text{MAAPE},\ \text{MASE}\}$.
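As a toy illustration of this averaging (metric values invented, and the metric set reduced to two), each method is ranked separately under each error metric and the ranks are then averaged:

```python
errors = {                                 # metric -> per-method error values
    "RMSE":  {"ets": 1.0, "arima": 1.4, "theta": 1.2},
    "SMAPE": {"ets": 9.0, "arima": 8.5, "theta": 9.5},
}

def rank_score(method):
    """Average rank of `method` across all error metrics (lower is better)."""
    total = 0
    for per_method in errors.values():
        ordered = sorted(per_method, key=per_method.get)  # lowest error = rank 1
        total += ordered.index(method) + 1
    return total / len(errors)

print(rank_score("ets"))    # ranks 1 (RMSE) and 2 (SMAPE) -> 1.5
```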

  • Ensemble size selection prediction:

$$\hat K = f_{A2}(\mathbf{x})$$

with $\mathbf{x}$ the meta-feature vector.

  • Pooling strategies for ensemble forecasts:
    • Unweighted (simple average):

$$\hat y_{t+h} = \frac{1}{K}\sum_{i=1}^{K}\hat y_{(i),\,t+h}$$

    • Weighted (reciprocal-rank):

$$w_i = \frac{1/r_{(i)}}{\sum_{j=1}^{K} 1/r_{(j)}},\qquad \hat y_{t+h} = \sum_{i=1}^{K} w_i\,\hat y_{(i),\,t+h}$$
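Both pooling rules can be checked with a small worked example (the forecast values and ranks are invented):

```python
def pool(forecasts, ranks=None):
    """Combine K point forecasts; unweighted average, or reciprocal-rank weighted."""
    if ranks is None:                      # unweighted: simple average
        return sum(forecasts) / len(forecasts)
    inv = [1.0 / r for r in ranks]         # reciprocal-rank weights, normalized
    w = [v / sum(inv) for v in inv]
    return sum(wi * fi for wi, fi in zip(w, forecasts))

f = [100.0, 110.0, 130.0]                  # forecasts of the K = 3 pooled models
print(pool(f))                             # 113.33...
print(pool(f, ranks=[1, 2, 3]))            # best-ranked forecast gets weight 6/11
```

With ranks 1, 2, 3 the weights are 6/11, 3/11, 2/11, so the weighted pool leans toward the top-ranked forecaster.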

ARES Adaptive Planning (Lukina et al., 2016):

  • Dynamic cost decrement per level:

$$\Delta_i = \frac{J(s_{i-1})}{m-(i-1)}$$

  • PSO swarm and horizon update policy:

    • Increment the horizon $h$ up to $h_\text{max}$ if the level target is not reached.
    • Increment the swarm size $p$ up to $p_\text{max}$ if the horizon is maxed out.
    • Resample clones by success, apply PSO anew.
  • Formal convergence condition:

The level-by-level scheme ensures $J(s_m) \leq \varphi$ in at most $m$ steps.
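A sketch of the resulting cost schedule, taking the threshold $\varphi$ as 0 for simplicity (the function name is invented):

```python
def level_targets(j0, m):
    """Per-level cost targets when each level shaves off
    delta_i = J(s_{i-1}) / (m - (i - 1)) from the current cost."""
    j, targets = j0, []
    for i in range(1, m + 1):
        delta = j / (m - (i - 1))
        j -= delta
        targets.append(j)
    return targets

print(level_targets(12.0, 4))  # [9.0, 6.0, 3.0, 0.0]
```

The decrements come out equal, so the schedule descends linearly to the threshold in exactly $m$ levels, matching the monotonic-convergence guarantee.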

5. Empirical Evaluation and Comparative Performance

  • Forecasting (Vaiciukynas et al., 2020): On the M4-micro dataset (12,561 series, expanded to 38,633 series-horizon pairs), the adaptive-horizon meta-learner achieved SMAPE of 9.21% (weighted ensemble), improving upon Theta (11.05%) and Comb (11.41%). Gains persisted across granularities (daily, weekly, monthly) and forecasting horizons, with 5–20% relative SMAPE improvements depending on the setup.
  • Planning (Lukina et al., 2016): ARES reached the cost threshold for flock formation with high success (95%) and efficient runtime. It is formally proven to guarantee convergence (given sufficient resources).

A plausible implication is that adaptive-horizon ensembles are empirically justified as robust strategies for both prediction and control tasks, particularly in settings where forecast/planning horizon and data characteristics jointly affect optimal model configuration.

6. Context and Theoretical Guarantees

Adaptive-horizon ensembles occupy a methodological space linking meta-learning, ensemble selection, horizon adaptation, and Lyapunov stability. In time-series forecasting, they provide a meta-learning operationalization for dynamically pooling ranked forecasters and tuning ensemble size. In MDP planning, adaptive horizon selection and clone resampling together support convergence proofs, minimality of plan length (if $m$ is chosen optimally), and resistance to local minima via ensemble search diversity.

Both cited works explicitly demonstrate that the two-step meta-learning (for forecasting) and dynamic plan-level adaptation (for MDPs) lead to substantial practical improvements (as measured by SMAPE, MAAPE, MASE, and solution rates) and formal guarantees of cost threshold achievement.

7. Applications and Extensions

Adaptive-horizon ensembles are suited to domains where model/classifier/planner behavior is non-stationary across horizon lengths and input characteristics. In forecasting, this translates to business intelligence, economic series, and competition settings where horizon-specific behaviors are common. In planning, ARES can be customized into model-predictive control (MPC) regimes with adaptive receding horizons and ensemble-based exploration, especially in stochastic optimization and robotic motion planning.

This suggests further investigations may focus on extension to multivariate series, richer base model diversity, and integration with reinforcement learning or probabilistic graphical models to generalize the adaptive-horizon principle across predictive and prescriptive analytics. Applications where forecast horizon or planning depth strongly modulate optimal model selection are especially promising targets for future research.
