Position: Contextual Integrity is Inadequately Applied to Language Models (2501.19173v2)
Abstract: The machine learning community is discovering Contextual Integrity (CI) as a useful framework for assessing the privacy implications of LLMs. This is an encouraging development. CI theory emphasizes sharing information in accordance with privacy norms and can bridge the social, legal, political, and technical aspects essential for evaluating privacy in LLMs. However, this is also a good moment to reflect on the use of CI for LLMs. This position paper argues that the existing literature applies CI to LLMs inadequately, without embracing the theory's fundamental tenets. Such inadequate applications of CI could lead to incorrect conclusions and flawed privacy-preserving designs. We clarify the four fundamental tenets of CI theory, systematize prior work according to whether it deviates from these tenets, and highlight overlooked issues in experimental hygiene for LLMs (e.g., prompt sensitivity, positional bias).