
General-purpose HPO for unsupervised learning

Develop general-purpose hyperparameter optimization (HPO) algorithms for unsupervised learning, where suitable response functions and performance metrics are not readily available, so that such methods can be applied across tasks such as anomaly detection and generative modeling without requiring task-specific tuning procedures.


Background

The monograph emphasizes that most HPO formulations assume a well-defined response function derived from supervised metrics (e.g., validation loss or error). In unsupervised settings, selecting appropriate performance measures is challenging, particularly for tasks like anomaly detection and generative modeling, where commonly used metrics may capture only individual facets of quality, such as fidelity, while missing diversity or broader distributional similarity.

The authors review attempts at evaluation (e.g., precision/recall-style metrics for generative models and internal criteria for anomaly detection) and conclude that these have not yet yielded generally effective bases for HPO. They explicitly flag the need for methods that can systematically optimize hyperparameters in unsupervised scenarios without relying on narrowly tailored metrics.
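To make the gap concrete, the sketch below illustrates the response-function view of HPO in the supervised case: the search procedure itself (here, plain random search) is generic, and all task knowledge enters through the response function it minimizes. The toy `validation_loss` stand-in and the hyperparameter `lam` are hypothetical, not from the monograph; the point is that in unsupervised settings the search loop is unchanged, and what is missing is a generally valid response to pass into it.

```python
import random

# Hypothetical toy setup: a "model" whose quality depends on one
# hyperparameter lam. In the supervised case, the response function is
# simply held-out validation loss as a function of the hyperparameters.
def validation_loss(lam: float) -> float:
    # Stand-in for "train with lam, then evaluate on labeled data".
    return (lam - 0.3) ** 2 + 0.05  # minimized near lam = 0.3

def random_search(response, space, budget=50, seed=0):
    """Generic HPO loop: agnostic to where 'response' comes from."""
    rng = random.Random(seed)
    best_lam, best_val = None, float("inf")
    for _ in range(budget):
        lam = rng.uniform(*space)
        val = response(lam)
        if val < best_val:
            best_lam, best_val = lam, val
    return best_lam, best_val

best_lam, best_val = random_search(validation_loss, (0.0, 1.0))
# For unsupervised tasks one might plug in an internal criterion
# (e.g., a cluster-validity index) as 'response', but such proxies
# capture only facets of quality -- exactly the limitation flagged above.
```

The separation between the search routine and the response function is what the open problem targets: a general-purpose unsupervised HPO method would need a `response` that is meaningful across tasks without task-specific tuning.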

References

Still, the development of general purpose HPO algorithms for unsupervised learning remains an open problem.

Hyperparameter Optimization in Machine Learning (2410.22854 - Franceschi et al., 30 Oct 2024) in Conclusions, subsection "Response functions for unsupervised learning"