
Making a Science of Model Search (1209.5111v1)

Published 23 Sep 2012 in cs.CV and cs.NE

Abstract: Many computer vision algorithms depend on a variety of parameter choices and settings that are typically hand-tuned in the course of evaluating the algorithm. While such parameter tuning is often presented as being incidental to the algorithm, correctly setting these parameter choices is frequently critical to evaluating a method's full potential. Compounding matters, these parameters often must be re-tuned when the algorithm is applied to a new problem domain, and the tuning process itself often depends on personal experience and intuition in ways that are hard to describe. Since the performance of a given technique depends on both the fundamental quality of the algorithm and the details of its tuning, it can be difficult to determine whether a given technique is genuinely better, or simply better tuned. In this work, we propose a meta-modeling approach to support automated hyperparameter optimization, with the goal of providing practical tools to replace hand-tuning with a reproducible and unbiased optimization process. Our approach is to expose the underlying expression graph of how a performance metric (e.g., classification accuracy on validation examples) is computed from parameters that govern not only how individual processing steps are applied, but even which processing steps are included. A hyperparameter optimization algorithm transforms this graph into a program for optimizing that performance metric. Our approach yields state-of-the-art results on three disparate computer vision problems: a face-matching verification task (LFW), a face identification task (PubFig83), and an object recognition task (CIFAR-10), using a single algorithm. More broadly, we argue that the formalization of a meta-model supports more objective, reproducible, and quantitative evaluation of computer vision algorithms, and that it can serve as a valuable tool for guiding algorithm development.
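The abstract describes exposing the expression graph that maps hyperparameters, including choices of which processing steps are included at all, to a validation-set performance metric, and handing that graph to a hyperparameter optimization algorithm. The sketch below is a minimal illustration of that idea using plain random search over a conditional search space; the pipeline stages, parameter names, and the toy scoring function are invented for illustration and are not the paper's actual system or code.

```python
import random

# Hypothetical conditional search space: the choice of preprocessing step
# determines which downstream parameters are even sampled, mirroring the
# abstract's point that hyperparameters can govern *which* steps are included.
def sample_config(rng):
    config = {
        "preproc": rng.choice(["none", "whiten"]),
        "filter_size": rng.choice([3, 5, 7]),
        "log_learning_rate": rng.uniform(-5, -1),
    }
    if config["preproc"] == "whiten":
        # Only meaningful when whitening is part of the pipeline.
        config["whiten_reg"] = rng.uniform(1e-4, 1e-1)
    return config

# Toy stand-in for "train the pipeline and measure validation accuracy".
def validation_accuracy(config):
    score = 0.5
    score += 0.1 if config["preproc"] == "whiten" else 0.0
    score -= 0.02 * abs(config["filter_size"] - 5)
    score -= 0.05 * abs(config["log_learning_rate"] + 3)
    return score

def random_search(n_trials=50, seed=0):
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = sample_config(rng)
        score = validation_accuracy(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

if __name__ == "__main__":
    config, score = random_search()
    print("best config:", config)
    print("best validation accuracy:", round(score, 3))
```

Random sampling here is only a stand-in for the optimizer: the same interface of "configuration in, validation score out" admits more informed search strategies, which is the role the paper assigns to the hyperparameter optimization algorithm operating on the exposed expression graph.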

Authors (3)
  1. J. Bergstra
  2. D. Yamins
  3. D. D. Cox
Citations (83)
