
Determining LLMs’ Ability to Enforce Contextual Integrity in Privacy Decisions

Determine the extent to which large language models can reliably reason about and enforce Contextual Integrity norms, defined by five parameters (sender, subject, recipient, data type, and transmission principle), in order to make context-appropriate privacy decisions when operating as autonomous agents in open-ended environments.
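To make concrete what such enforcement involves, the minimal sketch below models a data flow by its five CI parameters and checks it against a set of context-specific norms. All names and norms here are hypothetical illustrations, not from the paper.

```python
from dataclasses import dataclass

# Hypothetical encoding of the five Contextual Integrity parameters.
@dataclass(frozen=True)
class InformationFlow:
    sender: str      # who transmits the information
    subject: str     # whom the information is about
    recipient: str   # who receives it
    data_type: str   # what kind of information flows
    principle: str   # the transmission principle governing the flow

# Toy norm base: flows considered appropriate in a healthcare context.
HEALTH_CONTEXT_NORMS = {
    InformationFlow("patient", "patient", "doctor", "medical_history", "confidentiality"),
    InformationFlow("doctor", "patient", "specialist", "medical_history", "with_consent"),
}

def is_appropriate(flow: InformationFlow, norms: set) -> bool:
    """A flow is context-appropriate only if it matches an established norm."""
    return flow in norms

# Changing only the recipient and transmission principle turns an
# appropriate flow into a CI violation, even though the data is unchanged.
leak = InformationFlow("doctor", "patient", "insurer", "medical_history", "unsolicited")
assert not is_appropriate(leak, HEALTH_CONTEXT_NORMS)
```

The point of the sketch is that appropriateness depends on all five parameters jointly; an agent must make this contextual judgment dynamically, not via a fixed lookup, which is what the question above asks LLMs to do reliably.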


Background

The paper notes that modern LLM agents access private data, process untrusted content, and communicate externally, creating powerful privacy leakage vectors. It emphasizes that current LLMs have limited capability to make context-appropriate privacy decisions and frames Contextual Integrity as the normative basis for judging appropriate data flows.

Given these challenges, the authors explicitly state uncertainty about how well LLMs can satisfy Contextual Integrity requirements and caution that, without establishing this capability, integrating agents broadly into users’ lives is premature. This motivates research to measure and validate LLMs’ abilities to apply CI parameters reliably in real-world scenarios.

References

"The extent to which LLMs possess these capabilities remains uncertain, making it premature to reliably integrate LLM agents into open-ended environments with full access to our social lives."

Position: Privacy Is Not Just Memorization! (Mireshghallah et al., 2 Oct 2025, arXiv:2510.01645), Section 3.3, "Indirect Chat and Context Leakage via Input-Output Flow," Agent-Specific Risks (LLMs lack contextual privacy capabilities).