How Private are Language Models in Abstractive Summarization? (2412.12040v1)

Published 16 Dec 2024 in cs.CL

Abstract: Language models (LMs) have shown outstanding performance in text summarization, including sensitive domains such as medicine and law. In these settings, it is important that personally identifying information (PII) included in the source document does not leak into the summary. Prior efforts have mostly focused on studying how LMs may inadvertently elicit PII from training data. However, to what extent LMs can provide privacy-preserving summaries given a non-private source document remains under-explored. In this paper, we perform a comprehensive study across two closed- and three open-weight LMs of different sizes and families. We experiment with prompting and fine-tuning strategies for privacy preservation on a range of summarization datasets spanning three domains. Our extensive quantitative and qualitative analysis, including human evaluation, shows that LMs often cannot prevent PII leakage in their summaries and that current widely-used metrics cannot capture context-dependent privacy risks.

Privacy Implications of LLMs in Abstractive Summarization

The paper "How Private are LLMs in Abstractive Summarization?" addresses the critical concern of privacy in text summarization tasks, especially within sensitive domains like medicine and law. This work stands apart by shifting focus from the conventional investigation of privacy risks stemming from training data to examining how LLMs (LMs) perform in maintaining privacy when summarizing non-private source documents.

Study Design and Methodology

The authors conducted an extensive analysis involving both closed-weight (GPT-4o and Claude 3.5 Sonnet) and open-weight models (including Llama-3.1 and Mistral) of different sizes and families. The investigation was carried out across diverse datasets spanning medicine, law, and general news, with particular emphasis on sensitive text such as medical records and legal documents. The methodology incorporated both prompting variations (0-shot, 1-shot, and anonymization prompts) and instruction fine-tuning (IFT) of open-weight models. Evaluation combined qualitative assessment (human evaluation) with quantitative measures (ROUGE, BERTScore, and privacy-specific metrics such as the Leaked Documents Ratio (LDR) and Private Token Ratio (PTR)).
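
As a rough illustration of how such privacy metrics can be operationalized (not the authors' exact implementation), the sketch below assumes PII spans have already been extracted from each source document by an external tagger and simply checks for their reappearance in the generated summary:

```python
# Hedged sketch: approximate versions of the Leaked Documents Ratio (LDR)
# and Private Token Ratio (PTR). The paper's exact definitions may differ;
# PII detection is assumed to be done upstream (e.g. by an NER/PII tagger).

def leaked_pii(source_pii: set[str], summary: str) -> set[str]:
    """PII strings from the source that reappear verbatim in the summary."""
    return {p for p in source_pii if p in summary}

def private_token_ratio(summary: str, source_pii: set[str]) -> float:
    """Share of summary tokens that belong to leaked PII strings."""
    tokens = summary.split()
    leaked = leaked_pii(source_pii, summary)
    leaked_tokens = [t for t in tokens if any(t in p or p in t for p in leaked)]
    return len(leaked_tokens) / len(tokens) if tokens else 0.0

def leaked_documents_ratio(corpus: list[dict]) -> float:
    """Fraction of documents whose summary leaks at least one source PII."""
    if not corpus:
        return 0.0
    hits = sum(1 for d in corpus if leaked_pii(d["source_pii"], d["summary"]))
    return hits / len(corpus)
```

Note that this sketch relies on verbatim string matching; as the paper argues for widely-used metrics in general, such surface-level checks are exactly what fail to capture context-dependent privacy risks.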

Key Findings

  1. Privacy Leakage Prevalence: The paper finds that LMs frequently fail to prevent PII leakage, even when explicitly instructed to avoid it. This is especially prominent in zero-shot scenarios, where models generally exhibit higher PTR values. Smaller models, notably open-weight ones, are particularly vulnerable compared to their larger counterparts.
  2. Impact of Prompting and Fine-Tuning: 1-shot prompting yielded marked improvements in both privacy preservation and summarization quality, showing the value of the added demonstration (a rough prompt sketch follows this list). Instruction fine-tuning improved privacy and utility further still, allowing open-weight models to reach and even surpass closed-weight models on some tasks and highlighting IFT's potential for instilling specific privacy-preserving behaviors.
  3. Model Comparison: Closed-weight models generally outperformed open-weight ones, particularly in their out-of-the-box ability to generate high-quality summaries without leaking PII. With IFT, however, open-weight models such as IFT-Llama-3.1-70B improved substantially, matching the closed-weight models on privacy metrics.
  4. Challenges in Broader Domains: The paper notes that maintaining privacy in less structured domains, such as news, presents additional challenges, possibly because it is harder to distinguish relevant from irrelevant PII without domain-specific guidelines.
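
To make the prompting conditions concrete, the templates below sketch how the 0-shot and 1-shot setups might look; the wording is an assumption for illustration and does not reproduce the authors' actual prompts:

```python
# Hedged sketch of 0-shot vs. 1-shot privacy-aware summarization prompts.
# The instructions and demonstration format are illustrative only, not the
# exact prompts used in the paper.
from typing import Optional

ZERO_SHOT_TEMPLATE = (
    "Summarize the following document. Do not include any personally "
    "identifying information (names, dates of birth, addresses, IDs).\n\n"
    "Document:\n{document}\n\nSummary:"
)

ONE_SHOT_TEMPLATE = (
    "Summarize documents without revealing personally identifying information.\n\n"
    "Document:\n{demo_document}\n"
    "Summary:\n{demo_private_summary}\n\n"
    "Document:\n{document}\n"
    "Summary:"
)

def build_prompt(document: str, demo: Optional[tuple[str, str]] = None) -> str:
    """Return a 1-shot prompt when a (document, private summary) demo is given,
    otherwise fall back to the 0-shot instruction."""
    if demo is None:
        return ZERO_SHOT_TEMPLATE.format(document=document)
    demo_document, demo_private_summary = demo
    return ONE_SHOT_TEMPLATE.format(
        demo_document=demo_document,
        demo_private_summary=demo_private_summary,
        document=document,
    )
```

The 1-shot variant simply prepends one (document, privacy-preserving summary) demonstration, which is the additional context the paper credits for the improvement.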

Implications and Future Work

This research highlights the privacy challenges inherent in deploying LMs for abstractive summarization, especially in privacy-sensitive areas. The findings underscore the need to further improve LMs or to develop more sophisticated anonymization strategies that ensure robust privacy preservation. From a practical perspective, the paper suggests instruction fine-tuning as an effective approach for training models to adhere to domain-specific privacy requirements.
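
As a minimal sketch of what such instruction fine-tuning data could look like (the field names, instruction text, and record format below are assumptions, not the paper's pipeline), each training example pairs a privacy-aware instruction and a source document with a reference summary that already omits PII:

```python
# Minimal sketch of assembling instruction fine-tuning (IFT) examples for
# privacy-preserving summarization. Field names and the instruction text are
# illustrative assumptions, not the paper's exact data format.

INSTRUCTION = (
    "Summarize the document below without including any personally "
    "identifying information."
)

def make_ift_example(document: str, private_summary: str) -> dict:
    """One (prompt, target) pair; the target is a summary with PII removed."""
    return {
        "prompt": f"{INSTRUCTION}\n\nDocument:\n{document}\n\nSummary:",
        "completion": private_summary,
    }

def build_ift_dataset(pairs: list[tuple[str, str]]) -> list[dict]:
    """Convert (document, privacy-preserving summary) pairs into IFT records."""
    return [make_ift_example(doc, summ) for doc, summ in pairs]
```

Records like these can be fed to any standard supervised fine-tuning loop; the intent is that the model learns privacy-preserving behavior from the targets themselves rather than from the instruction alone.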

Future investigations could benefit from exploring multimodal summarization tasks, incorporating images or structured data that could further complicate privacy preservation. Additionally, extending the research to include larger datasets and real-world application scenarios can shed light on how well these models perform outside of controlled experimental environments. There is also merit in examining dynamic, user-interactive settings where privacy risks can be more pronounced due to unscripted exchanges.

In conclusion, while significant strides have been made in improving LLM privacy, ongoing efforts will be crucial in ensuring these technologies can be widely—yet safely—adopted across various domains.

Authors (3)
  1. Anthony Hughes (2 papers)
  2. Nikolaos Aletras (72 papers)
  3. Ning Ma (39 papers)