
Purifier: Defending Data Inference Attacks via Transforming Confidence Scores (2212.00612v1)

Published 1 Dec 2022 in cs.LG and cs.CR

Abstract: Neural networks are susceptible to data inference attacks such as the membership inference attack, the adversarial model inversion attack, and the attribute inference attack, where the attacker can infer useful information (e.g., membership, a reconstruction, or sensitive attributes of a data sample) from the confidence scores predicted by the target classifier. In this paper, we propose a method, namely PURIFIER, to defend against membership inference attacks. It transforms the confidence score vectors predicted by the target classifier so that the purified confidence scores are indistinguishable between members and non-members in individual shape, statistical distribution, and prediction label. The experimental results show that PURIFIER defends against membership inference attacks with high effectiveness and efficiency, outperforming previous defense methods while incurring negligible utility loss. Our further experiments show that PURIFIER is also effective in defending against adversarial model inversion attacks and attribute inference attacks. For example, in our experiments the inversion error on the Facescrub530 classifier increases roughly fourfold, and attribute inference accuracy drops significantly when PURIFIER is deployed.
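The abstract's core idea is a post-processing transform on the classifier's confidence vector that preserves the predicted label while making the output distribution less informative to an attacker. The paper's PURIFIER is a learned network trained for this purpose; the sketch below is only a hypothetical stand-in that illustrates the interface, using simple quantization and renormalization (all function and parameter names here are illustrative, not from the paper).

```python
import numpy as np

def purify(confidences, n_bins=10):
    """Illustrative stand-in for a confidence-score purifier.

    Coarsens a confidence vector by quantizing each probability into
    n_bins levels, then renormalizes so the output is still a valid
    distribution. The predicted label (argmax) is preserved.
    NOTE: the paper's PURIFIER is a trained network, not this heuristic.
    """
    c = np.asarray(confidences, dtype=float)
    q = np.round(c * n_bins) / n_bins        # quantize probabilities
    q = np.clip(q, 1e-6, None)               # avoid exact zeros
    # re-assert the original argmax in case quantization caused a tie
    q[np.argmax(c)] = q.max() + 1.0 / n_bins
    return q / q.sum()                       # renormalize to sum to 1

scores = np.array([0.62, 0.31, 0.07])
purified = purify(scores)
assert np.argmax(purified) == np.argmax(scores)  # label preserved
assert abs(purified.sum() - 1.0) < 1e-9          # still a distribution
```

The design point this illustrates is the trade-off the abstract describes: the transform must remove member/non-member signal from the score vector's shape while keeping the prediction label (and thus classification accuracy) intact.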

Authors (8)
  1. Ziqi Yang (26 papers)
  2. Lijin Wang (25 papers)
  3. Da Yang (11 papers)
  4. Jie Wan (18 papers)
  5. Ziming Zhao (25 papers)
  6. Ee-Chien Chang (45 papers)
  7. Fan Zhang (686 papers)
  8. Kui Ren (170 papers)
Citations (10)
