
Who's Afraid of Thomas Bayes? (2107.14601v1)

Published 30 Jul 2021 in cs.LG and cs.CR

Abstract: In many cases, neural networks perform well on test data, but tend to overestimate their confidence on out-of-distribution data. This has led to the adoption of Bayesian neural networks, which better capture uncertainty and therefore more accurately reflect the model's confidence. For machine learning security researchers, this raises the natural question of how making a model Bayesian affects the security of the model. In this work, we explore the interplay between Bayesianism and two measures of security: model privacy and adversarial robustness. We demonstrate that Bayesian neural networks are more vulnerable to membership inference attacks in general, but are at least as robust as their non-Bayesian counterparts to adversarial examples.
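To make the privacy claim concrete, the classic baseline membership inference attack exploits the gap between a model's confidence on training (member) versus unseen (non-member) inputs. The sketch below is a minimal, hedged illustration of that confidence-threshold attack on synthetic scores; it is not the paper's specific attack, and the distributions and threshold are assumptions chosen for illustration only.

```python
import random

random.seed(0)

# Assumption for illustration: models tend to be more confident on
# training (member) examples than on unseen (non-member) ones.
member_conf = [min(1.0, random.gauss(0.95, 0.04)) for _ in range(1000)]
nonmember_conf = [min(1.0, random.gauss(0.80, 0.10)) for _ in range(1000)]

def threshold_attack(confidences, tau):
    """Predict 'member' whenever the model's confidence exceeds tau."""
    return [c > tau for c in confidences]

tau = 0.9  # hypothetical threshold; in practice it is tuned on shadow models
tp = sum(threshold_attack(member_conf, tau))     # members correctly flagged
fp = sum(threshold_attack(nonmember_conf, tau))  # non-members wrongly flagged
tn = len(nonmember_conf) - fp
accuracy = (tp + tn) / (len(member_conf) + len(nonmember_conf))
print(f"attack accuracy: {accuracy:.2f}")
```

An attack accuracy above 0.5 (random guessing) indicates membership leakage; intuitively, a better-calibrated model whose confidence gap between members and non-members is wider gives this attack more signal, which is the tension the abstract points to.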
