
Defending Against Adversarial Attacks Using Random Forests (1906.06765v1)

Published 16 Jun 2019 in cs.CV and cs.CR

Abstract: As deep neural networks (DNNs) have become increasingly important and popular, the robustness of DNNs is key to the safety of both the Internet and the physical world. Unfortunately, recent studies show that adversarial examples, which are hard to distinguish from real examples, can easily fool DNNs and manipulate their predictions. Observing that adversarial examples are mostly generated by gradient-based methods, in this paper we propose a simple yet very effective non-differentiable hybrid model that combines DNNs and random forests, rather than hiding gradients from attackers, to defend against the attacks. Our experiments show that our model can completely defend against white-box attacks, exhibits low transferability, and is quite resistant to three representative types of black-box attacks, while at the same time achieving classification accuracy similar to that of the original DNNs. Finally, we investigate and suggest a criterion for deciding where in a DNN to grow the random forests.
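The core idea of the abstract, replacing a DNN's differentiable classification head with a random forest trained on intermediate features, can be sketched as follows. This is only an illustrative sketch, not the paper's implementation: the frozen random projection stands in for a truncated DNN, and the synthetic data, layer sizes, and forest hyperparameters are all assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data standing in for images (assumption; the paper uses real
# image datasets and trained DNNs).
X, y = make_classification(n_samples=600, n_features=32, n_informative=16,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Frozen weights acting as a stand-in for the DNN truncated at some
# intermediate layer (the paper suggests a criterion for choosing it).
rng = np.random.default_rng(0)
W = rng.standard_normal((32, 64))

def dnn_features(x):
    """Forward pass up to the chosen intermediate layer (ReLU activations)."""
    return np.maximum(x @ W, 0.0)

# Replace the differentiable classification head with a random forest,
# making the end-to-end model non-differentiable so gradient-based
# attacks cannot backpropagate through it.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(dnn_features(X_train), y_train)

acc = forest.score(dnn_features(X_test), y_test)
print(f"hybrid model test accuracy: {acc:.2f}")
```

Because the forest's decision function has no useful gradient, attacks such as FGSM or PGD that rely on backpropagating a loss through the classifier have nothing to follow past the feature extractor, which is the defense mechanism the abstract describes.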

Authors (6)
  1. Yifan Ding (44 papers)
  2. Liqiang Wang (51 papers)
  3. Huan Zhang (171 papers)
  4. Jinfeng Yi (61 papers)
  5. Deliang Fan (49 papers)
  6. Boqing Gong (100 papers)
Citations (13)
