
Adversarial Attacks for Tabular Data: Application to Fraud Detection and Imbalanced Data (2101.08030v1)

Published 20 Jan 2021 in cs.CR, cs.AI, and cs.LG

Abstract: Guaranteeing the security of transactional systems is a crucial priority for all institutions that process transactions, in order to protect their businesses against cyberattacks and fraudulent attempts. Adversarial attacks are novel techniques that, beyond their proven effectiveness at fooling image classification models, can also be applied to tabular data. Adversarial attacks aim at producing adversarial examples, in other words, slightly modified inputs that induce the AI system to return incorrect outputs that are advantageous for the attacker. In this paper we illustrate a novel approach to modify and adapt state-of-the-art algorithms to imbalanced tabular data, in the context of fraud detection. Experimental results show that the proposed modifications lead to a perfect attack success rate, obtaining adversarial examples that are also less perceptible when analyzed by humans. Moreover, when applied to a real-world production system, the proposed techniques show that they can pose a serious threat to the robustness of advanced AI-based fraud detection procedures.
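To make the notion of an adversarial example on tabular data concrete, the following is a minimal sketch, not the paper's method: it assumes a hypothetical linear fraud scorer with illustrative weights, and applies a small FGSM-style perturbation (for a linear model, the gradient of the score with respect to the input is proportional to the weight vector) to lower the fraud score of a flagged transaction.

```python
import numpy as np

# Hypothetical linear fraud scorer: p(fraud) = sigmoid(w @ x + b).
# The features and weights below are illustrative, not from the paper.
w = np.array([0.8, -0.5, 1.2])   # e.g. amount, account_age, velocity
b = -0.3

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Fraud probability under the toy linear model."""
    return sigmoid(w @ x + b)

def adversarial_example(x, eps=0.2):
    """FGSM-style step: for a linear model the score gradient w.r.t.
    the input is proportional to w, so moving against sign(w) lowers
    the fraud score with a small, bounded change per feature."""
    return x - eps * np.sign(w)

x = np.array([1.5, 0.2, 2.0])    # a transaction the model flags as fraud
x_adv = adversarial_example(x)

print(predict(x), predict(x_adv))  # the adversarial score is lower
```

In practice the paper's setting is harder than this sketch suggests: tabular features may be categorical or constrained, and perturbations must remain plausible to a human analyst, which is what motivates the adapted algorithms the authors propose.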

Authors (6)
  1. Francesco Cartella (1 paper)
  2. Orlando Anunciacao (1 paper)
  3. Yuki Funabiki (1 paper)
  4. Daisuke Yamaguchi (2 papers)
  5. Toru Akishita (1 paper)
  6. Olivier Elshocht (2 papers)
Citations (65)
