
An Algorithm for Out-Of-Distribution Attack to Neural Network Encoder (2009.08016v4)

Published 17 Sep 2020 in cs.CV, cs.LG, and eess.IV

Abstract: Deep neural networks (DNNs), especially convolutional neural networks, have achieved superior performance on image classification tasks. However, such performance is only guaranteed if the input to a trained model is similar to the training samples, i.e., the input follows the probability distribution of the training set. Out-Of-Distribution (OOD) samples do not follow the distribution of the training set, and therefore the predicted class labels on OOD samples become meaningless. Classification-based methods have been proposed for OOD detection; however, in this study we show that this type of method has no theoretical guarantee and is practically breakable by our OOD Attack algorithm because of dimensionality reduction in the DNN models. We also show that Glow likelihood-based OOD detection is breakable as well.
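The core vulnerability named in the abstract — dimensionality reduction in the encoder — can be illustrated with a toy sketch: when an encoder maps a high-dimensional input to a low-dimensional latent code, many inputs (including far-OOD ones) share the same code, so an OOD input can be optimized until its latent matches that of an in-distribution sample. The linear encoder, dimensions, and learning rate below are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a DNN encoder: a random linear map reducing
# 64-dim inputs to 8-dim latent codes (dimensions are illustrative).
d_in, d_lat = 64, 8
W = rng.normal(size=(d_lat, d_in)) / np.sqrt(d_in)
encode = lambda x: W @ x

x_id = rng.normal(size=d_in)   # an "in-distribution" sample
z_target = encode(x_id)        # its latent code

# Start from a very different ("OOD") input and push its latent
# toward z_target by gradient descent on the squared latent distance.
x = x_id + 10.0 * rng.normal(size=d_in)
lr = 0.1
for _ in range(2000):
    grad = 2 * W.T @ (encode(x) - z_target)  # gradient of ||Wx - z*||^2
    x -= lr * grad

latent_gap = np.linalg.norm(encode(x) - z_target)  # collapses to ~0
input_gap = np.linalg.norm(x - x_id)               # stays large
print(latent_gap, input_gap)
```

Because the encoder discards the component of the input lying in its null space, the attack drives the latent gap to essentially zero while the input itself remains far from the in-distribution sample — any detector operating only on latent codes cannot tell the two apart.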

Authors (4)
  1. Liang Liang (24 papers)
  2. Linhai Ma (9 papers)
  3. Linchen Qian (5 papers)
  4. Jiasong Chen (7 papers)
Citations (2)
