
Software effort estimation based on optimized model tree (1703.05584v1)

Published 13 Mar 2017 in cs.SE

Abstract: Background: It is widely recognized that software effort estimation is a regression problem. The Model Tree (MT) is one of the machine-learning-based regression techniques useful for software effort estimation, but like other machine learning algorithms, MT has a large configuration space and requires careful parameter tuning. The choice of these parameters is dataset dependent, so no general guideline can govern this process; this forms the motivation of this work. Aims: This study investigates the use of a recent optimization algorithm, the Bees algorithm, to find the optimal choice of MT parameters for a given dataset and thereby improve prediction accuracy. Method: We used MT with optimal parameters identified by the Bees algorithm to construct software effort estimation models. The models were validated on eight datasets drawn from two main sources: PROMISE and ISBSG. We also used 3-fold cross-validation to empirically assess the prediction accuracies of the different estimation models. As benchmarks, results were compared to those obtained with Stepwise Regression, Case-Based Reasoning, and Multi-Layer Perceptron. Results: The results obtained from the combination of MT and the Bees algorithm are encouraging and outperform the other well-known estimation methods applied to the employed datasets. They also suggest that MT is effective among the techniques suitable for effort estimation. Conclusions: The Bees algorithm enabled us to automatically find the optimal MT parameters required to construct effort estimation models that fit each individual dataset, and it provided a significant improvement in prediction accuracy.
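
The described method pairs a model tree with the Bees algorithm: scout bees sample random parameter settings, the best settings receive a local neighbourhood search by recruited bees, and fitness is measured by cross-validated error, mirroring the paper's 3-fold protocol. Below is a minimal Python sketch of this idea, assuming scikit-learn and synthetic data; since scikit-learn ships no M5 model tree, DecisionTreeRegressor stands in as an illustrative surrogate, and all Bees-algorithm settings (scout count, recruit count, iterations) are illustrative values, not taken from the paper.

```python
# Hedged sketch: Bees-algorithm-style hyperparameter search for a regression
# tree, scored by 3-fold cross-validation. DecisionTreeRegressor is a stand-in
# for the paper's Model Tree; parameter ranges and bee counts are assumptions.
import random

from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = random.Random(42)
X, y = make_regression(n_samples=120, n_features=5, noise=10.0, random_state=42)

def fitness(params):
    """Mean 3-fold CV score (negative MAE) for one parameter setting."""
    model = DecisionTreeRegressor(
        max_depth=params["max_depth"],
        min_samples_leaf=params["min_samples_leaf"],
        random_state=0,
    )
    return cross_val_score(model, X, y, cv=3,
                           scoring="neg_mean_absolute_error").mean()

def random_site():
    """A scout bee's random point in the parameter space."""
    return {"max_depth": rng.randint(2, 12),
            "min_samples_leaf": rng.randint(1, 20)}

def neighbour(site):
    """Small perturbation of a site (local 'flower patch' search)."""
    return {"max_depth": max(2, site["max_depth"] + rng.randint(-1, 1)),
            "min_samples_leaf": max(1, site["min_samples_leaf"] + rng.randint(-2, 2))}

# Illustrative Bees algorithm settings (not from the paper).
N_SCOUTS, N_BEST, N_RECRUITS, N_ITER = 10, 3, 4, 15

sites = [random_site() for _ in range(N_SCOUTS)]
for _ in range(N_ITER):
    sites.sort(key=fitness, reverse=True)
    new_sites = []
    for site in sites[:N_BEST]:
        # Recruit bees search around each selected site; keep the best found.
        candidates = [site] + [neighbour(site) for _ in range(N_RECRUITS)]
        new_sites.append(max(candidates, key=fitness))
    # Remaining scouts keep exploring the space at random.
    sites = new_sites + [random_site() for _ in range(N_SCOUTS - N_BEST)]

best_params = max(sites, key=fitness)
print("best parameters:", best_params, "CV score:", round(fitness(best_params), 3))
```

Swapping in an actual model-tree implementation (e.g., M5P via Weka) would only change the model construction inside the fitness function; the search loop itself is independent of the learner.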

Citations (32)
