
Hyperparameter Optimization for Effort Estimation (1805.00336v4)

Published 28 Apr 2018 in cs.SE

Abstract: Software analytics has been widely used in software engineering for many tasks such as generating effort estimates for software projects. One of the "black arts" of software analytics is tuning the parameters controlling a data mining algorithm. Such hyperparameter optimization has been widely studied in other software analytics domains (e.g. defect prediction and text mining) but, so far, has not been extensively explored for effort estimation. Accordingly, this paper seeks simple, automatic, effective and fast methods for finding good tunings for automatic software effort estimation. We introduce a hyperparameter optimization architecture called OIL (Optimized Inductive Learning). We test OIL on a wide range of hyperparameter optimizers using data from 945 software projects. After tuning, large improvements in effort estimation accuracy were observed (measured in terms of standardized accuracy). From those results, we recommend using regression trees (CART) tuned by differential evolution, combined with a default analogy-based estimator. This particular combination of learner and optimizer often achieves in a few hours what other optimizers need days to weeks of CPU time to accomplish. An important part of this analysis is its reproducibility and refutability. All our scripts and data are online. It is hoped that this paper will prompt and enable much more research on better methods to tune software effort estimators.
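The recommended combination, CART tuned by differential evolution, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's OIL implementation: the dataset is synthetic stand-in data (the paper uses 945 real software projects), and the hyperparameter bounds, DE settings, and error metric here are assumptions chosen for brevity; the authors' actual scripts are available online.

```python
# Minimal sketch: tune a CART regression tree's hyperparameters with
# differential evolution, using scikit-learn and SciPy. Dataset, bounds,
# and scoring below are illustrative assumptions, not the paper's setup.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

# Placeholder effort data standing in for real project records.
X, y = make_regression(n_samples=200, n_features=10, noise=0.3, random_state=1)

def objective(params):
    """Mean absolute error of CART under the given tuning (to minimize)."""
    max_depth, min_samples_split, min_samples_leaf = params
    model = DecisionTreeRegressor(
        max_depth=int(round(max_depth)),
        min_samples_split=int(round(min_samples_split)),
        min_samples_leaf=int(round(min_samples_leaf)),
        random_state=1,
    )
    # cross_val_score returns negated MAE; negate again to get MAE.
    return -cross_val_score(
        model, X, y, scoring="neg_mean_absolute_error", cv=3
    ).mean()

# Search bounds for (max_depth, min_samples_split, min_samples_leaf).
bounds = [(1, 12), (2, 20), (1, 12)]
result = differential_evolution(objective, bounds, maxiter=10, seed=1)
print("best tuning:", np.round(result.x).astype(int), "MAE:", result.fun)
```

Differential evolution mutates a population of candidate tunings and keeps whichever candidates lower the objective, which is why it tends to find good CART settings in far fewer evaluations than exhaustive grid search.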

Authors (6)
  1. Tianpei Xia (11 papers)
  2. Rahul Krishna (28 papers)
  3. Jianfeng Chen (33 papers)
  4. George Mathew (14 papers)
  5. Xipeng Shen (28 papers)
  6. Tim Menzies (128 papers)
Citations (48)
