Learning Hyperparameters via a Data-Emphasized Variational Objective (2502.01861v2)

Published 3 Feb 2025 in cs.LG and stat.ML

Abstract: When training large flexible models on limited data, avoiding overfitting is a practical concern. Common grid search or smarter search methods rely on expensive separate runs at each candidate hyperparameter while carving out a validation set that reduces available training data. In this paper, we consider direct gradient-based learning of regularization hyperparameters on the full training set via the evidence lower bound ("ELBo") objective from Bayesian variational methods. We focus on scenarios where the model is over-parameterized for flexibility while the approximate posterior is chosen to be Gaussian with isotropic covariance for tractability, even though it cannot match the true posterior exactly. In such scenarios, we find the ELBo prioritizes posteriors that match the prior variance, which leads to severely underfitting the data. Instead, we recommend a data-emphasized ELBo that upweights the influence of the data likelihood relative to the prior. In Bayesian transfer learning of classifiers for text and images, our method reduces 88+ hour grid searches of past work to under 3 hours while delivering comparable accuracy. We further demonstrate how our approach enables efficient yet accurate approximations of Gaussian processes with learnable length-scale kernels.
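
To make the objective concrete: for weights w with approximate posterior q(w) and prior p(w), the standard ELBo is E_q[log p(D | w)] - KL(q(w) || p(w)), and the data-emphasized variant described in the abstract upweights the first (data likelihood) term relative to the KL penalty on the prior. The PyTorch sketch below is an illustrative reading of that idea, not the authors' implementation: the toy regression data, the upweighting factor kappa, the known noise variance, and the single-sample gradient estimate are assumptions made here for brevity. Only the isotropic Gaussian posterior, the learnable prior scale (the regularization hyperparameter), and the likelihood upweighting follow the abstract.

import torch

torch.manual_seed(0)

# Toy over-parameterized regression problem (illustrative only, not from the paper).
N, D = 40, 200
X = torch.randn(N, D)
true_w = torch.zeros(D)
true_w[:3] = torch.tensor([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * torch.randn(N)

# Approximate posterior q(w) = N(mu, sigma^2 I), isotropic Gaussian as in the abstract.
mu = torch.zeros(D, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
# Prior p(w) = N(0, tau^2 I); tau plays the role of the regularization hyperparameter
# and is learned by gradient descent alongside the variational parameters.
log_tau = torch.zeros(1, requires_grad=True)

kappa = 10.0          # assumed upweighting factor for the likelihood term (kappa = 1 gives the plain ELBo)
noise_var = 0.1 ** 2  # assumed known observation noise variance
opt = torch.optim.Adam([mu, log_sigma, log_tau], lr=0.05)

for step in range(2000):
    sigma, tau = log_sigma.exp(), log_tau.exp()
    # Reparameterized single-sample estimate of the expected log-likelihood under q.
    w = mu + sigma * torch.randn(D)
    log_lik = -0.5 * ((y - X @ w) ** 2).sum() / noise_var
    # Closed-form KL( N(mu, sigma^2 I) || N(0, tau^2 I) ).
    kl = 0.5 * (D * sigma**2 / tau**2 + (mu**2).sum() / tau**2
                - D + 2 * D * (log_tau - log_sigma)).squeeze()
    # Negative data-emphasized ELBo: the data likelihood term is scaled by kappa.
    loss = -(kappa * log_lik - kl)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"learned prior scale tau = {log_tau.exp().item():.3f}")

Setting kappa = 1 recovers the standard ELBo, which, per the abstract, tends to prefer posteriors that match the prior variance and severely underfit in this over-parameterized regime; kappa > 1 shifts the balance toward fitting the data while tau is still learned on the full training set without a separate validation split.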
