$A^{4}NT$: Author Attribute Anonymity by Adversarial Training of Neural Machine Translation (1711.01921v3)

Published 6 Nov 2017 in cs.CR, cs.CL, cs.CY, cs.SI, and stat.ML

Abstract: Text-based analysis methods make it possible to reveal privacy-relevant author attributes such as the gender, age, and identity of a text's author. Such methods can compromise the privacy of an anonymous author even when the author tries to remove privacy-sensitive content. In this paper, we propose an automatic method, called Adversarial Author Attribute Anonymity Neural Translation ($A^4NT$), to combat such text-based adversaries. We combine sequence-to-sequence language models used in machine translation with generative adversarial networks to obfuscate author attributes. Unlike machine translation techniques, which require paired data, our method can be trained on unpaired corpora of text from different authors. Importantly, we propose and evaluate techniques to impose constraints on our $A^4NT$ network so that it preserves the semantics of the input text. $A^4NT$ learns to make minimal changes to the input text to successfully fool author attribute classifiers, while aiming to maintain the meaning of the input. We show through experiments on two different datasets and three settings that our proposed method is effective in fooling the author attribute classifiers and thereby improving the anonymity of authors.
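At a high level, the training objective described in the abstract balances two competing terms: an adversarial term that rewards fooling the attribute classifier, and a semantic term that penalizes changing the input's meaning. The toy sketch below illustrates that trade-off only; the function name, weights, and probability inputs are illustrative assumptions, not the paper's actual losses or implementation.

```python
import math

def combined_obfuscation_loss(p_attr_on_output, p_semantic_similarity,
                              w_adv=1.0, w_sem=1.0):
    """Toy combined objective for an attribute-obfuscating text transformer.

    p_attr_on_output: the attribute classifier's probability of the *true*
        attribute on the transformed text (lower means better anonymity).
    p_semantic_similarity: a proxy score in (0, 1] for how well the
        transformed text preserves the input's meaning (higher is better).
    """
    # Adversarial term: penalize the generator when the classifier
    # still recognizes the true author attribute.
    adv_loss = -math.log(1.0 - p_attr_on_output + 1e-9)
    # Semantic term: penalize drift away from the input's meaning.
    sem_loss = -math.log(p_semantic_similarity + 1e-9)
    return w_adv * adv_loss + w_sem * sem_loss

# A transformation that hides the attribute while keeping the meaning
# scores lower (better) than one that leaks the attribute.
good = combined_obfuscation_loss(p_attr_on_output=0.1,
                                 p_semantic_similarity=0.9)
leaky = combined_obfuscation_loss(p_attr_on_output=0.9,
                                  p_semantic_similarity=0.9)
```

In the paper's actual setup the adversarial signal comes from a GAN-style game between the translation network and the attribute classifier, trained on unpaired corpora, rather than from fixed scalar probabilities as in this sketch.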

Authors (3)
  1. Rakshith Shetty (9 papers)
  2. Bernt Schiele (210 papers)
  3. Mario Fritz (160 papers)
Citations (93)