
Towards explainable meta-learning

Published 11 Feb 2020 in stat.ML and cs.LG (arXiv:2002.04276v2)

Abstract: Meta-learning is a field that aims at discovering how different machine learning algorithms perform on a wide range of predictive tasks. Such knowledge speeds up hyperparameter tuning and feature engineering. With the use of surrogate models, various aspects of a predictive task, such as meta-features and landmarker models, are used to predict expected performance. State-of-the-art approaches focus on searching for the best meta-model but do not explain how these different aspects contribute to its performance. However, to build a new generation of meta-models, we need a deeper understanding of the importance and effect of meta-features on model tunability. In this paper, we propose techniques developed for eXplainable Artificial Intelligence (XAI) to examine and extract knowledge from black-box surrogate models. To our knowledge, this is the first paper that shows how post-hoc explainability can be used to improve meta-learning.
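
The abstract describes fitting a black-box surrogate model over task meta-features and landmarkers to predict expected performance, and then applying post-hoc XAI techniques to that surrogate. Below is a minimal sketch of such a workflow, not the authors' implementation: the synthetic meta-features, the GradientBoostingRegressor surrogate, and the use of scikit-learn's permutation_importance and partial_dependence are illustrative assumptions standing in for whatever surrogate and explanation tools the paper actually uses.

# Sketch: fit a black-box surrogate (meta-features -> expected performance),
# then inspect it with post-hoc explanation methods.
# All meta-features and the performance target below are synthetic placeholders.

import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance, partial_dependence

rng = np.random.default_rng(0)
n_tasks = 200

# Hypothetical meta-features describing each predictive task.
meta = pd.DataFrame({
    "n_rows": rng.integers(100, 100_000, n_tasks),
    "n_features": rng.integers(5, 500, n_tasks),
    "class_imbalance": rng.uniform(0.5, 0.99, n_tasks),
    "landmarker_1nn_auc": rng.uniform(0.5, 1.0, n_tasks),
})

# Simulated target: observed performance (e.g. AUC) of a tuned algorithm per task.
perf = (
    0.5
    + 0.3 * meta["landmarker_1nn_auc"]
    - 0.1 * meta["class_imbalance"]
    + 0.02 * np.log10(meta["n_rows"])
    + rng.normal(0, 0.02, n_tasks)
)

# Black-box surrogate model: meta-features -> expected performance.
surrogate = GradientBoostingRegressor(random_state=0).fit(meta, perf)

# Post-hoc explanation 1: permutation importance of each meta-feature.
imp = permutation_importance(surrogate, meta, perf, n_repeats=20, random_state=0)
for name, mean in sorted(zip(meta.columns, imp.importances_mean),
                         key=lambda t: -t[1]):
    print(f"{name:>22s}: {mean:.4f}")

# Post-hoc explanation 2: partial-dependence profile of predicted performance
# with respect to a single meta-feature.
pd_result = partial_dependence(surrogate, meta, features=["landmarker_1nn_auc"])
print(pd_result["average"][0][:5])  # first few points of the PD curve

The permutation importances rank which meta-features the surrogate relies on, and the partial-dependence profile shows how predicted performance changes as one meta-feature varies; this is the kind of knowledge extraction from black-box surrogates that the paper argues for.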
