
On the Difficulty of Defending Self-Supervised Learning against Model Extraction (2205.07890v3)

Published 16 May 2022 in cs.LG, cs.AI, and cs.CR

Abstract: Self-Supervised Learning (SSL) is an increasingly popular ML paradigm that trains models to transform complex inputs into representations without relying on explicit labels. These representations encode similarity structures that enable efficient learning of multiple downstream tasks. Recently, ML-as-a-Service providers have commenced offering trained SSL models over inference APIs, which transform user inputs into useful representations for a fee. However, the high cost involved in training these models and their exposure over APIs both make black-box extraction a realistic security threat. We thus explore model stealing attacks against SSL. Unlike traditional model extraction on classifiers that output labels, the victim models here output representations; these representations are of significantly higher dimensionality compared to the low-dimensional prediction scores output by classifiers. We construct several novel attacks and find that approaches that train directly on a victim's stolen representations are query efficient and enable high accuracy for downstream models. We then show that existing defenses against model extraction are inadequate and not easily retrofitted to the specificities of SSL.
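To make the attack family concrete, the sketch below shows the core idea of training directly on a victim's stolen representations: query the black-box encoder with unlabeled data, then regress a local surrogate onto the returned vectors. This is a minimal illustration, not the paper's exact method; the `victim_api` stub, the ResNet-18 trunk, the 512-dimensional head, and the plain MSE objective are all assumptions chosen for clarity (the paper also explores other training objectives).

```python
# Minimal sketch of a representation-stealing attack on an SSL encoder.
# Assumes a hypothetical black-box endpoint `victim_api(batch)` that returns
# representations for a fee; architecture and loss are illustrative only.
import torch
import torch.nn as nn
import torchvision.models as models

def victim_api(batch: torch.Tensor) -> torch.Tensor:
    """Placeholder for the victim's paid inference API.
    In a real attack this would be a network call; here it is a stub."""
    raise NotImplementedError("stand-in for the victim's inference endpoint")

# Surrogate encoder the attacker trains locally (ResNet-18 trunk as an example,
# with a 512-dimensional output head matching the assumed representation size).
surrogate = models.resnet18(num_classes=512)
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # regress directly onto the stolen representations

def steal_step(unlabeled_batch: torch.Tensor) -> float:
    # 1) Spend queries: obtain the victim's representations for attacker data.
    with torch.no_grad():
        stolen = victim_api(unlabeled_batch)
    # 2) Train the surrogate to reproduce those representations.
    pred = surrogate(unlabeled_batch)
    loss = loss_fn(pred, stolen)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note how the attacker never needs labels: because the victim emits high-dimensional representations rather than class scores, each query carries much more signal, which is what makes these attacks query efficient.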

Authors (5)
  1. Adam Dziedzic (47 papers)
  2. Nikita Dhawan (7 papers)
  3. Muhammad Ahmad Kaleem (7 papers)
  4. Jonas Guan (4 papers)
  5. Nicolas Papernot (123 papers)
Citations (21)
