
Learned Cost Models Overview

Updated 3 September 2025
  • Learned cost models are machine learning-based methods that predict resource consumption, latency, and operational metrics from historical and synthetic data.
  • They utilize regression, neural networks, and decision trees to replace traditional hand-tuned cost estimators, offering enhanced flexibility and accuracy.
  • Applications include query optimization, software project forecasting, and system configuration, while addressing challenges like generalization, explainability, and uncertainty quantification.

A learned cost model (LCM) is a predictive method, typically based on machine learning, that estimates resource consumption, latency, or other operational metrics as a function of system inputs or features. These models are integrated into analytical or data-driven systems to improve cost estimation for decision-making tasks such as software project forecasting, data management, query optimization, and system configuration. Unlike manually crafted or purely statistical approaches, learned cost models are trained on historical or synthetic data and can leverage diverse, high-dimensional features for improved flexibility and accuracy.

1. Foundations and Concepts

Learned cost models replace or augment hand-tuned, formulaic cost estimation with flexible paradigms that infer cost directly from data. The central idea is to model the complex, often non-linear mapping from descriptive features (project attributes, query plans, operator parameters, hardware characteristics) to cost metrics (such as total effort, execution time, or resource usage) using machine learning algorithms.

Critical to the adoption of learned cost models are:

  • Their ability to handle heterogeneous, incomplete, and noisy data sources (Heidrich et al., 2014).
  • Their utility in modeling distributions and relationships that are empirically observed rather than assumed.
  • Their extensibility to encode and optimize for domain-specific cost functions that may capture business, operational, or user-centric constraints (Spiegel et al., 2018, Zellinger et al., 4 Jul 2025).

Learned cost models are typically evaluated not just by prediction error but by their real-world impact on system behaviors—e.g., the quality of selected plans in a query optimizer or the overall cost efficiency in deployment scenarios (Heinrich et al., 3 Feb 2025, Heinrich et al., 13 Mar 2024).
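
As a concrete illustration, the following minimal sketch trains such a feature-to-cost mapping; the feature set, the synthetic data, and the choice of RandomForestRegressor are illustrative assumptions, not a model from the cited works.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical training data: each row describes one executed operator via
# (estimated input cardinality, estimated selectivity, degree of parallelism);
# the label is the observed runtime in milliseconds.
rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 3)) * np.array([1e6, 1.0, 64.0])
y = 0.002 * X[:, 0] * X[:, 1] / np.maximum(X[:, 2], 1.0) + rng.normal(0.0, 5.0, 1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The learned cost model: a non-linear mapping from features to runtime.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
predicted_runtime_ms = model.predict(X_test)
```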

2. Methodologies and Algorithms

The construction of learned cost models entails several recurring stages:

  • Data Acquisition and Feature Engineering: Collection of cost-relevant data, often involving unification of data scales, semantic transformation, and the handling of missing values and outliers (Heidrich et al., 2014).
  • Model Training: Application of regression algorithms (e.g., decision trees, random forests, neural networks, Elastic Net, or meta-ensembles) to map input features to cost, with loss functions tailored to the cost metric under consideration (e.g., mean squared error, mean squared log error, Q-error) (Siddiqui et al., 2020, Hilprecht et al., 2022); a minimal sketch of these loss choices follows this list.
  • Evaluation Metrics: Adoption of application-specific metrics. For project estimation, these include Mean Magnitude of Relative Error (MMRE), Mean Squared Deviation (MSD), and Mean Absolute Deviation (MAD); for database queries and workloads, common metrics include Q-error, rank correlation (ρ), selected runtime, pick rate, and balanced accuracy (Heinrich et al., 3 Feb 2025, Hilprecht et al., 2022). For ML programs, I/O, computation, and latency costs are combined into a single time estimate (Boehm, 2015).
  • Integration and Tuning: Parameter tuning, data clustering, and the hybridization with traditional (often analytical) cost models to improve predictions and ensure robust generalization (Ding et al., 2019, Zhang et al., 2021, Peng et al., 18 Jun 2025).
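
Below is a minimal sketch of two of the loss-related choices above, under the assumption that runtimes are positive: fitting on log-transformed runtimes (so squared error approximates mean squared log error) and evaluating with Q-error. The data and model choice are hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def q_error(actual, predicted, eps=1e-9):
    """Q-error = max(actual/pred, pred/actual); 1.0 is a perfect prediction."""
    a, p = np.maximum(actual, eps), np.maximum(predicted, eps)
    return np.maximum(a / p, p / a)

# Hypothetical data: one feature (input cardinality), runtime grows ~linearly.
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 1e6, size=(1000, 1))
y = 0.001 * X[:, 0] * rng.lognormal(0.0, 0.3, 1000)

# Fit on log runtimes, so squared error on the transformed target behaves
# like mean squared log error and treats relative errors uniformly.
model = GradientBoostingRegressor(random_state=0).fit(X, np.log(y))
pred = np.exp(model.predict(X))
print("median Q-error:", np.median(q_error(y, pred)))
```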

Formally, an example evaluation equation:

\text{MMRE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\text{Actual}_i - \text{Predicted}_i}{\text{Actual}_i} \right|

and for distributed resource configuration:

\text{cost} \propto \frac{\theta_P}{P} + \theta_c \cdot P, \quad P_{\text{opt}} = \sqrt{\theta_P / \theta_c}

where P is the number of partitions/containers (Siddiqui et al., 2020).
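
Both formulas translate directly into code; the following sketch simply evaluates them on hypothetical values.

```python
import numpy as np

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error, as defined above."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs((actual - predicted) / actual))

def optimal_partitions(theta_p, theta_c):
    """Minimizer of cost(P) = theta_p / P + theta_c * P."""
    return np.sqrt(theta_p / theta_c)

# Hypothetical numbers for illustration.
print(mmre([100, 200, 400], [120, 180, 500]))            # ~0.183
print(optimal_partitions(theta_p=3200.0, theta_c=2.0))   # 40.0
```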

3. Applications in Software Effort and Industrial Cost Estimation

Case studies in industrial settings such as software project estimation highlight unique requirements for handling heterogeneity and data imperfection:

  • Optimized Set Reduction (OSR®): An iterative set-reduction strategy applies Boolean predicates derived from multi-dimensional project features to identify a cluster of similar projects, then estimates target attributes (e.g., cost) via simple statistics (mean, median) over that cluster (Heidrich et al., 2014); a simplified sketch follows this list.
  • Data Quality: Preprocessing is critical: unifying categorical representations, cleaning outliers, filling or ignoring incomplete entries, and clustering for increased homogeneity directly improve estimation accuracy.
  • Parameter Sensitivity: Extensive grid search or exploratory parameter selection is required to optimize minimal set size, predicate complexity, and objective function for best predictive performance.
  • Findings: Clustering projects into homogeneous groups and harnessing multi-feature estimation substantially outperform single-attribute or purely size-based models. OSR® empirically achieves lower MMRE than linear regression, especially on curated or clustered data subsets. However, the result is context-dependent: in some scenarios, regression still outperforms OSR® when preprocessing is inadequate (Heidrich et al., 2014).
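
The following is a deliberately simplified illustration of the set-reduction idea, not the actual OSR® algorithm: Boolean predicates narrow a hypothetical project history to a cluster of similar projects, and the median cost of that cluster serves as the estimate.

```python
import numpy as np

# Hypothetical project history: (team_size, domain, uses_new_tech, cost).
projects = [
    (5,  "finance", False, 120.0),
    (6,  "finance", False, 135.0),
    (7,  "finance", True,  210.0),
    (25, "telecom", True,  900.0),
    (30, "telecom", True,  980.0),
]

def estimate_cost(predicates, min_cluster_size=2):
    """Keep projects satisfying all Boolean predicates; estimate cost as the
    cluster median, falling back to the full set if the cluster is too small."""
    cluster = [p for p in projects if all(pred(p) for pred in predicates)]
    if len(cluster) < min_cluster_size:
        cluster = projects
    return float(np.median([cost for *_, cost in cluster]))

# Estimate cost for a small finance project via two predicates.
print(estimate_cost([lambda p: p[1] == "finance", lambda p: p[0] <= 10]))  # 135.0
```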

4. Integration in Data Management and System Optimization

Learned cost models have been integrated at multiple layers of modern data management systems, including plan selection in query optimizers, operator access path selection, and resource configuration such as choosing the number of partitions or containers. A minimal plan-selection sketch is shown below.
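
The following minimal sketch shows the integration pattern at the optimizer layer: featurize each candidate plan, predict its cost with a learned model, and execute the cheapest. The plan encoding and the toy model here are illustrative assumptions.

```python
from sklearn.linear_model import LinearRegression

# Hypothetical plan encoding: (number of joins, estimated rows scanned).
plans = {"hash_join_first": (2, 1e6), "index_nested_loop": (2, 5e4)}

# Stand-in for a trained learned cost model (here fit on two toy examples).
model = LinearRegression().fit([[1, 1e4], [3, 1e7]], [5.0, 900.0])

def choose_plan(candidates, model):
    """Predict a cost for every candidate plan and return the cheapest."""
    scored = {name: float(model.predict([list(feats)])[0])
              for name, feats in candidates.items()}
    return min(scored, key=scored.get), scored

best, costs = choose_plan(plans, model)
print(best, costs)  # index_nested_loop is predicted cheaper
```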

5. Challenges, Limitations, and Explainability

Despite notable prediction improvements, learned cost models encounter several persistent challenges:

  • Generalization and Training Costs: High accuracy is observed in static, curated scenarios, but training/inference cost and adaptation to dynamic or out-of-distribution workloads remain significant hurdles (Wang et al., 2020, Heinrich et al., 3 Feb 2025). In particular, retraining overhead and robustness to shifts in data correlations, skew, and domain size are open issues.
  • Plan Quality vs. Prediction Accuracy: Even state-of-the-art learned models with superior Q-error may underperform traditional models in plan selection tasks due to poor monotonicity, ranking, or calibration, which highlights the need for metrics tied to end-to-end decision quality (e.g., surpassed plan fraction, actual runtime, balanced accuracy in operator access path selection) (Heinrich et al., 3 Feb 2025); the sketch after this list illustrates two such decision-quality metrics.
  • Data and Model Biases: Reliance on pre-optimized training data leads to over-preferring certain operator classes or search paths. Diversification strategies in synthetic or bootstrapped SQL generation improve workload coverage and reduce training-data requirements (Nidd et al., 27 Aug 2025).
  • Explainability and Debugging: Recent work adapts explainability methods (gradient-based, perturbation-based, tailored graph maskers) to LCMs, providing node-level attributions and metrics (e.g., fidelity, runtime correlation) to open the black box, diagnose outlier predictions, and systematically improve model reliability (Heinrich et al., 19 Jul 2025).
  • Interpreting Economic Impact: Learned cost models must sometimes incorporate domain-specific cost functions (affine or composite measures) optimized for business or operational targets (e.g., predictive maintenance cost, economic error price in LLM deployment) (Spiegel et al., 2018, Zellinger et al., 4 Jul 2025).
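
As a sketch of such decision-quality metrics, the following computes rank correlation and pick rate on hypothetical runtimes; note how small prediction errors can still coexist with a wrong plan choice (the second query below).

```python
import numpy as np
from scipy.stats import spearmanr

def pick_rate(actual, predicted):
    """Fraction of queries for which the cheapest-predicted plan is also the
    truly cheapest; actual/predicted are (n_queries, n_plans) runtime arrays."""
    return float(np.mean(np.argmin(predicted, axis=1) == np.argmin(actual, axis=1)))

# Hypothetical runtimes for 3 queries with 2 candidate plans each.
actual    = np.array([[10.0, 12.0], [30.0, 20.0], [5.0, 9.0]])
predicted = np.array([[11.0, 15.0], [25.0, 28.0], [6.0, 7.0]])

rho, _ = spearmanr(actual.ravel(), predicted.ravel())
print("rank correlation:", rho, "pick rate:", pick_rate(actual, predicted))
```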

6. Future Research and Directions

Recent empirical and methodological findings suggest the following research frontiers for learned cost models:

  • Hybridization with Expert Systems: Integrating traditional analytical estimates as input features or priors into LCM architectures improves robustness, calibration, and interpretability, especially when training data is sparse or heavily biased (Heinrich et al., 3 Feb 2025).
  • Zero-shot and Few-shot Learning: Universal pre-trained cost models that leverage transferable representations can drastically reduce data requirements for new systems or workloads and maintain high accuracy under schema and distribution drift (Hilprecht et al., 2022, Heinrich et al., 13 Mar 2024).
  • Robustness and Uncertainty Quantification: Incorporating explicit risk measures (e.g., via modeling cost distributions or uncertainty estimation) into optimization and deployment aligns cost model selection more closely with real-world robustness and suboptimality constraints (Kamali et al., 26 Jan 2024); a quantile-regression sketch follows this list.
  • Explainability-Driven Improvement: Systematic explainability for learned cost models enables transparent debugging, targeted retraining, and model revisions, paving the way toward trustworthy integration into critical production environments (Heinrich et al., 19 Jul 2025).
  • Adaptive Data-Driven Generation: Bootstrapping LCMs with LLM-driven or generative-AI-based SQL query generation promises more diverse, higher-quality training corpora, yielding better models from fewer labeled samples (Nidd et al., 27 Aug 2025).
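
As one minimal sketch of uncertainty quantification, the following fits quantile regressors to obtain a predictive cost interval rather than a point estimate; the data and model choice are illustrative assumptions, not a method from the cited work.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical data: runtime grows with input size, with multiplicative noise.
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 100.0, size=(2000, 1))
y = 0.5 * X[:, 0] * rng.lognormal(0.0, 0.4, 2000)

# One model per quantile yields a predictive interval instead of a point cost,
# letting a downstream optimizer trade expected cost against tail risk.
quantiles = {q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                          random_state=0).fit(X, y)
             for q in (0.1, 0.5, 0.9)}

x_new = [[50.0]]
lo, med, hi = (quantiles[q].predict(x_new)[0] for q in (0.1, 0.5, 0.9))
print(f"predicted cost ~ {med:.1f} (80% interval [{lo:.1f}, {hi:.1f}])")
```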

The continued convergence of domain knowledge, machine learning, and explainability provides a path toward learned cost models that are both accurate and practically reliable for the diverse and evolving demands of complex systems.