Student Surpasses Teacher: Imitation Attack for Black-Box NLP APIs (2108.13873v2)

Published 29 Aug 2021 in cs.CR, cs.CL, and cs.LG

Abstract: Machine-learning-as-a-service (MLaaS) has attracted millions of users with its splendid large-scale models. Although published as black-box APIs, the valuable models behind these services are still vulnerable to imitation attacks. Recently, a series of works has demonstrated that attackers can steal or extract the victim models. Nonetheless, none of the previously stolen models could outperform the original black-box APIs. In this work, we apply unsupervised domain adaptation and multi-victim ensembling to show that attackers could potentially surpass the victims, which goes beyond the previous understanding of model extraction. Extensive experiments on both benchmark datasets and real-world APIs validate that imitators can outperform the original black-box models on transferred domains. We consider our work a milestone in the research on imitation attacks, especially against NLP APIs, as the superior performance could influence the defense or even the publishing strategy of API providers.
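
As a rough illustration of the pipeline the abstract describes, the sketch below shows the basic imitation-attack loop: query several black-box victim APIs on unlabeled text from the attacker's target domain, combine their outputs into pseudo-labels, and train a local "student" model on them. This is a minimal sketch, not the paper's method: the `query_victim_api` function is a hypothetical stand-in for a commercial endpoint, majority voting is used as an illustrative multi-victim ensemble, and the unsupervised domain adaptation step is omitted.

```python
# Minimal sketch of an imitation attack on black-box NLP APIs:
# ensemble predictions from several (hypothetical) victim APIs into
# pseudo-labels, then train a local student model on those labels.
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def query_victim_api(api_id: int, text: str) -> str:
    """Hypothetical placeholder for a black-box NLP API (e.g., sentiment).

    A real attack would call the provider's endpoint; this dummy version
    only exists so the sketch stays self-contained and runnable.
    """
    return "positive" if len(text) % 2 == api_id % 2 else "negative"


def ensemble_pseudo_label(text: str, num_victims: int = 3) -> str:
    """Multi-victim ensemble: majority vote over the victims' predictions."""
    votes = [query_victim_api(i, text) for i in range(num_victims)]
    return Counter(votes).most_common(1)[0][0]


# Unlabeled queries drawn from the attacker's target (transfer) domain.
unlabeled_texts = [
    "the service was quick and friendly",
    "terrible battery life, would not buy again",
    "works exactly as advertised",
    "arrived broken and support never replied",
]

# Build the imitation training set from ensembled API outputs (pseudo-labels).
pseudo_labels = [ensemble_pseudo_label(t) for t in unlabeled_texts]

# Train a simple local student model on the extracted labels.
student = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
student.fit(unlabeled_texts, pseudo_labels)

print(student.predict(["great value for the price"]))
```

In the paper's setting, the student can end up outperforming any single victim on the transferred domain because the ensemble aggregates complementary victims and the student is adapted to the target distribution; the toy classifier above only demonstrates the query-then-distill structure of the attack.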

Authors (5)
  1. Qiongkai Xu (33 papers)
  2. Xuanli He (43 papers)
  3. Lingjuan Lyu (131 papers)
  4. Lizhen Qu (68 papers)
  5. Gholamreza Haffari (141 papers)
Citations (20)
