Hyperparameter Optimization Is Deceiving Us, and How to Stop It (2102.03034v4)

Published 5 Feb 2021 in cs.LG and cs.LO

Abstract: Recent empirical work shows that inconsistent results based on choice of hyperparameter optimization (HPO) configuration are a widespread problem in ML research. When comparing two algorithms J and K searching one subspace can yield the conclusion that J outperforms K, whereas searching another can entail the opposite. In short, the way we choose hyperparameters can deceive us. We provide a theoretical complement to this prior work, arguing that, to avoid such deception, the process of drawing conclusions from HPO should be made more rigorous. We call this process epistemic hyperparameter optimization (EHPO), and put forth a logical framework to capture its semantics and how it can lead to inconsistent conclusions about performance. Our framework enables us to prove EHPO methods that are guaranteed to be defended against deception, given bounded compute time budget t. We demonstrate our framework's utility by proving and empirically validating a defended variant of random search.

Citations (27)

Summary

  • The paper introduces a formal EHPO framework that reveals and mitigates the bias from arbitrary hyperparameter settings in ML research.
  • It proposes a defended random search method that groups trials and draws conclusions only when unanimous, ensuring robust performance evaluation.
  • Empirical validation on benchmarks such as CIFAR-10 shows that this approach avoids the inconsistent conclusions that traditional hyperparameter optimization can produce.

An Examination of Hyperparameter Deception in Machine Learning Research

In ML, the optimization of hyperparameters (HPs) is crucial for training algorithms to perform effectively on various tasks. The paper under discussion provides a comprehensive theoretical framework for understanding and mitigating the issue of hyperparameter deception, where different choices in hyperparameter optimization (HPO) configurations can lead to inconsistent conclusions about algorithm performance. This essay aims to elucidate the key contributions and implications of this work, primarily for experienced researchers in computer science and ML.

The paper begins by noting a prevalent issue in ML research: inconsistent performance outcomes driven by varying HPO configurations. For instance, searching one hyperparameter subspace might indicate that Algorithm J outperforms Algorithm K, while searching another might suggest the opposite. This discrepancy highlights a critical flaw in current HPO practices, where ad hoc decisions can substantially affect empirical results, leading to what the authors term "hyperparameter deception."

Formalizing Hyperparameter Deception

The authors propose an approach called epistemic hyperparameter optimization (EHPO) to address this issue. EHPO formalizes the process of drawing conclusions from HPO using a logical framework, making the derivation of those conclusions more rigorous. The central thesis is that if an adversary (modeled as an evil demon) can control the hyper-hyperparameter (hyper-HP) configuration, that is, the choices that govern the HPO procedure itself, such as the search space and search method, and can thereby manipulate the outcomes of HPO within a reasonable computational budget, then the conclusions drawn from such HPO are unreliable.

To model this, the paper introduces a multimodal logic framework combining two primary modal operators: $\Diamond_t$, representing what can possibly be brought about within a bounded compute-time budget $t$, and $\mathcal{B}$, representing the beliefs or conclusions a reasoner holds about algorithm performance. The core axioms of this logic address the potential for being deceived by HPO, encompassing both the randomness in HPO and the human element of hyper-HP selection.
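
As a rough illustration (paraphrasing the paper's semantics rather than quoting its exact axioms), deceivability about a proposition $p$ can be expressed as the adversary being able to bring the reasoner to believe $p$ and, under a different hyper-HP choice, to believe its negation, all within the budget $t$:

```latex
% Deceivability about p within compute budget t (informal paraphrase):
\Diamond_t \mathcal{B}\, p \;\wedge\; \Diamond_t \mathcal{B}\, \lnot p
```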

Defense Against Deception

Two significant contributions arise from this theoretical framework. First, the authors prove that it is possible to define a "defended reasoner": a way of drawing conclusions from HPO that is guaranteed to avoid hyperparameter deception, given a finite time budget $t$. The defended reasoner $\mathcal{B}_*$ concludes a proposition $p$ (such as "Algorithm J outperforms Algorithm K") only if it is impossible for any adversary to make it conclude $\lnot p$ within the same time budget.
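
Read informally (again as a paraphrase rather than the paper's exact formula), the defended reasoner adopts $p$ only when believing $p$ is reachable and no adversarial choice of hyper-HPs could make it believe the negation within the same budget:

```latex
% Informal reading of the defended reasoner (paraphrase):
\mathcal{B}_*\, p \;:\Leftrightarrow\; \Diamond_t \mathcal{B}\, p \;\wedge\; \lnot\, \Diamond_t \mathcal{B}\, \lnot p
```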

Second, the paper proposes a concrete implementation of this defended reasoner using a variant of random search. This method involves running multiple independent trials of random search, dividing the trials into groups, and only drawing conclusions if all groups unanimously agree on the outcome. This approach mitigates the impact of any single biased set of hyper-HPs and ensures that the conclusions are robust to variations in the HPO configuration.
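
As a concrete illustration of this grouping-and-unanimity idea, the Python sketch below outlines one way the procedure could be organized. The function names, the `sample_config` and `evaluate` callbacks, and the abstention behavior are illustrative assumptions, not the paper's reference implementation.

```python
import random

def defended_random_search(sample_config, evaluate, num_groups=5,
                           trials_per_group=20, seed=0):
    """Sketch of a grouped, unanimity-based random-search comparison.

    sample_config(rng) -> a random hyperparameter configuration (assumed callback).
    evaluate(algorithm, config) -> validation score, higher is better (assumed callback).
    Returns "J", "K", or None (abstain) when the groups disagree.
    """
    rng = random.Random(seed)
    group_winners = []
    for _ in range(num_groups):
        # Each group is an independent random-search run for both algorithms.
        best = {"J": float("-inf"), "K": float("-inf")}
        for _ in range(trials_per_group):
            config = sample_config(rng)
            for alg in ("J", "K"):
                best[alg] = max(best[alg], evaluate(alg, config))
        group_winners.append("J" if best["J"] > best["K"] else "K")
    # Conclude only if every group agrees; otherwise abstain entirely.
    if all(w == group_winners[0] for w in group_winners):
        return group_winners[0]
    return None
```

In this reading, `evaluate` would train the model under the sampled configuration and return validation accuracy, and returning None corresponds to the defended reasoner declining to conclude either direction rather than reporting a possibly deceptive winner.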

Empirical Validation

The theoretical insights are validated empirically through experiments on well-known benchmarks, such as training VGG16 on CIFAR-10 with different optimizers. The experiments show that traditional grid search can yield contradictory conclusions about the relative performance of SGD, Heavy Ball, and Adam when different hyper-HP configurations are used. By contrast, the defended random search approach consistently avoids such contradictory outcomes, demonstrating its robustness against hyperparameter deception.
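
To make the comparison protocol concrete, a schematic sketch of a grid-search comparison is given below. The grids, optimizer names, and the `train_and_eval` callback are placeholders introduced for illustration; the point is only that the reported winner depends on which grid (a hyper-HP choice) was searched.

```python
def compare_optimizers_on_grid(train_and_eval, optimizers, learning_rate_grid):
    """Return the optimizer achieving the best validation accuracy over a grid.

    train_and_eval(optimizer, lr) -> validation accuracy after training a fixed
    model (e.g. VGG16 on CIFAR-10) for a fixed budget; placeholder callback.
    """
    best = {opt: max(train_and_eval(opt, lr) for lr in learning_rate_grid)
            for opt in optimizers}
    return max(best, key=best.get)

# Two hypothetical hyper-HP choices: the same protocol run over different
# learning-rate grids can report different winners.
grid_a = [0.1, 0.01, 0.001]
grid_b = [1.0, 0.3, 0.1]
# winner_a = compare_optimizers_on_grid(train_and_eval, ["SGD", "HeavyBall", "Adam"], grid_a)
# winner_b = compare_optimizers_on_grid(train_and_eval, ["SGD", "HeavyBall", "Adam"], grid_b)
```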

Implications and Future Directions

The implications of this research are profound for both practical and theoretical aspects of ML. Practically, it calls for a shift in how the community approaches HPO. Researchers should adopt more rigorous methods, such as the proposed defended random search, to ensure the reliability of empirical findings. Theoretically, this work opens avenues for further research into formal methods for verification and robustness in ML, extending beyond hyperparameters to other aspects of the ML pipeline.

Moreover, the insights from this paper are pertinent for new areas in ML, such as meta-learning and neural architecture search (NAS), where automated methods guide the selection of hyperparameters and model architectures. Ensuring robustness in these settings is critical for their broader adoption and acceptance in high-stakes applications.

Conclusion

The paper makes a compelling case for addressing hyperparameter deception in ML through rigorous logical frameworks and defended HPO methods. By providing both theoretical foundations and practical implementations, it sets a new standard for reliability and robustness in empirical ML research. The community is encouraged to adopt these practices and continue exploring ways to safeguard against biases and inconsistencies in model evaluation. This work significantly contributes to the ongoing efforts to make ML a more robust and scientifically rigorous field.
