Test Distribution-Aware Active Learning: A Principled Approach Against Distribution Shift and Outliers (2106.11719v2)

Published 22 Jun 2021 in cs.LG and stat.ML

Abstract: Expanding on MacKay (1992), we argue that conventional model-based methods for active learning - like BALD - have a fundamental shortfall: they fail to directly account for the test-time distribution of the input variables. This can lead to pathologies in the acquisition strategy, as what is maximally informative for model parameters may not be maximally informative for prediction: for example, when the data in the pool set is more dispersed than that of the final prediction task, or when the distribution of pool and test samples differs. To correct this, we revisit an acquisition strategy that is based on maximizing the expected information gained about possible future predictions, referring to this as the Expected Predictive Information Gain (EPIG). As EPIG does not scale well for batch acquisition, we further examine an alternative strategy, a hybrid between BALD and EPIG, which we call the Joint Expected Predictive Information Gain (JEPIG). We consider using both for active learning with Bayesian neural networks on a variety of datasets, examining the behavior under distribution shift in the pool set.
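To make the acquisition objective concrete, below is a minimal sketch (not taken from the paper or its released code) of a Monte Carlo EPIG estimator for a single candidate pool point in a classification setting. It assumes you already have K posterior parameter samples and can evaluate class probabilities under each, plus M test inputs sampled from the (estimated) test-time input distribution; the function names `entropy` and `epig_for_candidate` are illustrative, not the authors' API.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy in nats along the given axis."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def epig_for_candidate(probs_pool, probs_test):
    """
    probs_pool: [K, C]    p(y | x_candidate, theta_k) for K posterior samples.
    probs_test: [K, M, C] p(y* | x*_m, theta_k) for M sampled test inputs.

    Returns a Monte Carlo estimate of
        EPIG(x) ~= (1/M) sum_m I(y; y*_m | x, x*_m),
    where the mutual information uses the posterior-marginalised joint predictive
        p(y, y*_m) ~= (1/K) sum_k p(y | x, theta_k) p(y*_m | x*_m, theta_k).
    """
    K, C = probs_pool.shape
    _, M, _ = probs_test.shape

    # Joint predictive over (y, y*_m): average the per-sample outer products.
    # joint[m, c, d] = (1/K) sum_k probs_pool[k, c] * probs_test[k, m, d]
    joint = np.einsum("kc,kmd->mcd", probs_pool, probs_test) / K

    # Marginal predictives, marginalised over the posterior samples.
    p_y = probs_pool.mean(axis=0)        # [C]
    p_ystar = probs_test.mean(axis=0)    # [M, C]

    # I(y; y*_m) = H[p(y)] + H[p(y*_m)] - H[p(y, y*_m)]
    mi = entropy(p_y) + entropy(p_ystar) - entropy(joint.reshape(M, -1))
    return mi.mean()

# Toy usage with random predictive distributions (illustration only).
rng = np.random.default_rng(0)
probs_pool = rng.dirichlet(np.ones(3), size=50)          # K=50 samples, C=3 classes
probs_test = rng.dirichlet(np.ones(3), size=(50, 100))   # M=100 test inputs
print(epig_for_candidate(probs_pool, probs_test))
```

In a full active-learning loop this score would be computed for every candidate in the pool and the highest-scoring point acquired; the abstract's point is that, unlike BALD's parameter-information objective, this estimator is weighted by the test-input distribution, so candidates that are informative only about regions never seen at test time score low.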

Citations (20)
