
Testing the Stationarity Assumption in Software Effort Estimation Datasets (2012.08692v1)

Published 16 Dec 2020 in cs.SE

Abstract: Software effort estimation (SEE) models are typically developed based on an underlying assumption that all data points are equally relevant to the prediction of effort for future projects. The dynamic nature of several aspects of the software engineering process could mean that this assumption does not hold in at least some cases. This study employs three kernel estimator functions to test the stationarity assumption in three software engineering datasets that have been used in the construction of software effort estimation models. The kernel estimators are used in the generation of non-uniform weights which are subsequently employed in weighted linear regression modeling. Prediction errors are compared to those obtained from uniform models. Our results indicate that, for datasets that exhibit underlying non-stationary processes, uniform models are more accurate than non-uniform models. In contrast, the accuracy of uniform and non-uniform models for datasets that exhibited stationary processes was essentially equivalent. The results of our study also confirm prior findings that the accuracy of effort estimation models is independent of the type of kernel estimator function used in model development.
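The approach described in the abstract — kernel estimators producing non-uniform weights that feed a weighted linear regression, compared against a uniform (equal-weight) model — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the Gaussian kernel, the bandwidth, the use of a project's temporal index as the distance measure, and the toy data are all assumptions for demonstration.

```python
import numpy as np


def gaussian_kernel(d, bandwidth=1.0):
    # Gaussian kernel: weight decays smoothly with distance d.
    # (Assumed kernel; the paper tests three kernel estimator functions
    # but the abstract does not name them.)
    return np.exp(-0.5 * (d / bandwidth) ** 2)


def weighted_linear_regression(X, y, w):
    # Solve weighted least squares: minimize sum_i w_i * (y_i - x_i @ beta)^2
    # by scaling rows with sqrt(w) and using ordinary least squares.
    Xb = np.column_stack([np.ones(len(X)), X])  # add intercept column
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)
    return beta


# Toy dataset: project size -> effort, projects ordered in time.
rng = np.random.default_rng(0)
size = rng.uniform(10, 100, 30)
effort = 3.0 * size + rng.normal(0, 5, 30)
time_index = np.arange(30)

# Non-uniform model: weight older projects less, relative to the newest one
# (temporal distance as the kernel input is an illustrative choice).
w_nonuniform = gaussian_kernel(time_index[-1] - time_index, bandwidth=10.0)
beta_nonuniform = weighted_linear_regression(size, effort, w_nonuniform)

# Uniform model: all projects equally relevant (the stationarity assumption).
beta_uniform = weighted_linear_regression(size, effort, np.ones(30))
```

Comparing prediction errors of the two fitted models on held-out projects is then the test the paper applies: if down-weighting distant data helps, the process is plausibly non-stationary.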
