
On Visual Hallmarks of Robustness to Adversarial Malware (1805.03553v1)

Published 9 May 2018 in cs.LG, cs.CR, cs.HC, and stat.ML

Abstract: A central challenge of adversarial learning is interpreting the resulting hardened model. In this contribution, we ask how robust generalization can be visually discerned and whether a concise view of the interactions between a hardened decision map and input samples is possible. We first provide a means of visually comparing a hardened model's loss behavior on the adversarial variants generated during training against its loss behavior on adversarial variants generated from other sources. This allows us to confirm that the association between loss-landscape flatness and generalization, observed in naturally trained models, extends to adversarially hardened models and robust generalization. To complement these means of interpreting model parameter robustness, we also use self-organizing maps to provide a visual means of superimposing adversarial and natural variants on a model's decision space, allowing the model's global robustness to be examined comprehensively.
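The loss-landscape flatness inspection mentioned in the abstract can be sketched in a few lines: perturb the trained parameters along a random direction and record the loss at each step. The sketch below is a toy illustration under stated assumptions (a logistic model on synthetic data standing in for a hardened malware classifier; the model, data, and direction choice are not the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: logistic regression on
# synthetic data (illustrative only, not the paper's malware model).
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y_sign = np.sign(X @ w_true)  # labels in {-1, +1}

def loss(w):
    """Mean logistic loss of weight vector w on (X, y_sign)."""
    margins = (X @ w) * y_sign
    return float(np.mean(np.log1p(np.exp(-margins))))

# Treat w_hat as the trained ("hardened") parameter vector.
w_hat = w_true.copy()

# 1-D loss-landscape slice: move the parameters along a random unit
# direction and record the loss, as in common flatness visualizations.
d = rng.normal(size=10)
d /= np.linalg.norm(d)
alphas = np.linspace(-1.0, 1.0, 21)
curve = [loss(w_hat + a * d) for a in alphas]
```

Plotting `curve` against `alphas` (and repeating with adversarial variants substituted for `X`) gives the kind of side-by-side loss-behavior comparison the abstract describes; a flat minimum shows little loss growth near `alpha = 0`.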

Authors (4)
  1. Alex Huang (4 papers)
  2. Abdullah Al-Dujaili (15 papers)
  3. Erik Hemberg (27 papers)
  4. Una-May O'Reilly (43 papers)
Citations (7)
