ER-AE: Differentially Private Text Generation for Authorship Anonymization (1907.08736v4)

Published 20 Jul 2019 in cs.CR, cs.CL, and cs.LG

Abstract: Most privacy protection studies for textual data focus on removing explicit sensitive identifiers. However, personal writing style, a strong indicator of authorship, is often neglected. Recent studies, such as SynTF, have shown promising results on privacy-preserving text mining. However, their anonymization algorithm can only output numeric term vectors, which are difficult for recipients to interpret. We propose a novel text generation model with a two-set exponential mechanism for authorship anonymization. By augmenting the semantic information through a REINFORCE training reward function, the model can generate differentially private text that has close semantics and a similar grammatical structure to the original text while removing personal traits of the writing style. It does not assume any conditioned labels or parallel text data for training. We evaluate the performance of the proposed model on a real-life peer review dataset and the Yelp review dataset. The results suggest that our model outperforms the state of the art on semantic preservation, authorship obfuscation, and stylometric transformation.
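The paper's two-set exponential mechanism is not specified in the abstract, but it builds on the standard exponential mechanism from differential privacy, which samples a candidate with probability proportional to an exponentiated utility score. A minimal sketch of that base mechanism for choosing a replacement token (the `utility` scoring function and candidate set here are illustrative placeholders, not the paper's actual definitions):

```python
import math
import random

def exponential_mechanism(candidates, utility, epsilon, sensitivity=1.0):
    """Sample one candidate with probability proportional to
    exp(epsilon * utility(c) / (2 * sensitivity)).

    This is the generic exponential mechanism; the paper's two-set
    variant partitions the candidate vocabulary, which is not shown here.
    """
    # Scale utilities by the privacy budget; subtract the max score
    # before exponentiating for numerical stability.
    scores = [epsilon * utility(c) / (2.0 * sensitivity) for c in candidates]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]

    # Roulette-wheel sampling over the unnormalized weights.
    r = random.random() * sum(weights)
    acc = 0.0
    for cand, w in zip(candidates, weights):
        acc += w
        if acc >= r:
            return cand
    return candidates[-1]

# Illustrative use: pick a substitute for a word, preferring
# semantically closer candidates (higher utility) as epsilon grows.
word = exponential_mechanism(
    ["excellent", "good", "adequate"],
    utility=lambda c: {"excellent": 0.9, "good": 0.7, "adequate": 0.2}[c],
    epsilon=2.0,
)
```

With a small epsilon the choice approaches uniform (strong privacy, weaker semantic fidelity); with a large epsilon the highest-utility candidate dominates, which matches the usual privacy-utility trade-off the abstract alludes to.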

Authors (4)
  1. Haohan Bo (1 paper)
  2. Steven H. H. Ding (8 papers)
  3. Benjamin C. M. Fung (15 papers)
  4. Farkhund Iqbal (4 papers)
Citations (32)