Privacy- and Utility-Preserving NLP with Anonymized Data: A case study of Pseudonymization (2306.05561v1)

Published 8 Jun 2023 in cs.CL

Abstract: This work investigates the effectiveness of different pseudonymization techniques, ranging from rule-based substitutions to using pre-trained LLMs, on a variety of datasets and models used for two widely used NLP tasks: text classification and summarization. Our work provides crucial insights into the gaps between original and anonymized data (focusing on the pseudonymization technique) and model quality, and fosters future research into higher-quality anonymization techniques to better balance the trade-offs between data protection and utility preservation. We make our code, pseudonymized datasets, and downstream models publicly available.
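As a rough illustration of the rule-based end of the spectrum mentioned in the abstract, the sketch below pseudonymizes entity mentions matched by hand-written regular expressions, mapping each distinct surface form to a stable placeholder so the same entity receives the same pseudonym throughout a document. The rules, labels, and helper names are illustrative assumptions for this page, not the paper's actual pipeline.

```python
import re
from collections import defaultdict

# Illustrative rule-based pseudonymizer (not the paper's implementation).
# Each rule is (LABEL, regex); matches are replaced by consistent
# placeholders such as [PERSON_1], [EMAIL_1], ...
counters = defaultdict(int)   # per-label pseudonym counters
mapping = {}                  # (label, surface form) -> placeholder

def pseudonymize(text, rules):
    """Apply each rule in order, substituting a stable pseudonym per entity."""
    def make_repl(label):
        def _sub(match):
            key = (label, match.group(0))
            if key not in mapping:
                counters[label] += 1
                mapping[key] = f"[{label}_{counters[label]}]"
            return mapping[key]
        return _sub

    for label, pattern in rules:
        text = re.sub(pattern, make_repl(label), text)
    return text

if __name__ == "__main__":
    # Toy rules: email addresses and simple "Firstname Lastname" names.
    rules = [
        ("EMAIL", r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        ("PERSON", r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"),
    ]
    sample = "Please ask Jane Doe to email jane.doe@example.com about the study."
    print(pseudonymize(sample, rules))
    # -> "Please ask [PERSON_1] to email [EMAIL_1] about the study."
```

In practice, the rule-based approach would rely on a named-entity recognizer rather than regexes alone, and the LLM-based alternative the abstract refers to would generate replacement surface forms instead of fixed placeholders; this sketch only shows the core substitution-and-bookkeeping idea.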

Citations (6)
