
A Method for Restoring the Training Set Distribution in an Image Classifier (1802.01435v1)

Published 5 Feb 2018 in stat.ML, cs.AI, and cs.CV

Abstract: Convolutional Neural Networks are a well-known staple of modern image classification. However, it can be difficult to assess the quality and robustness of such models. Deep models are known to perform well on a given training and estimation set, but can easily be fooled by data that is specifically generated for that purpose. It has been shown that one can produce an artificial example that does not represent the desired class, yet activates the network in the desired way. This paper describes a new way of reconstructing a sample from the training set distribution of an image classifier without deep knowledge about the underlying distribution. This gives access to the elements of images that most influence the decision of a convolutional network and allows meaningful information about the training distribution to be extracted.
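
The abstract does not spell out the paper's reconstruction procedure. As a rough illustration of the general idea it alludes to, the sketch below performs generic gradient-based class-activation maximization (in the style of standard CNN visualization work), synthesizing an input that strongly activates a chosen class. The classifier, class index, step count, and regularization weight are illustrative assumptions, not the method proposed in the paper.

    import torch
    from torchvision import models

    # Illustrative sketch only: generic class-activation maximization,
    # not the reconstruction procedure proposed in the paper.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # assumed stand-in classifier
    model.eval()

    target_class = 207  # assumed example class index
    x = torch.zeros(1, 3, 224, 224, requires_grad=True)  # start from a blank image
    optimizer = torch.optim.Adam([x], lr=0.05)

    for step in range(200):
        optimizer.zero_grad()
        logits = model(x)
        # Maximize the target logit while penalizing large pixel values
        # (a simple L2 regularizer keeps the image in a plausible range).
        loss = -logits[0, target_class] + 1e-4 * x.pow(2).sum()
        loss.backward()
        optimizer.step()

    # x now approximates an input that strongly activates target_class;
    # inspecting it hints at which image features drive the classifier's decision.

Inspecting such synthesized inputs is one common way to probe which image elements a convolutional network relies on, which is the kind of insight the paper aims to extract from the training distribution.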

Citations (1)