Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks (2310.07879v2)

Published 11 Oct 2023 in cs.HC

Abstract: Privacy is a key principle for developing ethical AI technologies, but how does including AI technologies in products and services change privacy risks? We constructed a taxonomy of AI privacy risks by analyzing 321 documented AI privacy incidents. We codified how the unique capabilities and requirements of AI technologies described in those incidents generated new privacy risks, exacerbated known ones, or otherwise did not meaningfully alter the risk. We present 12 high-level privacy risks that AI technologies either newly created (e.g., exposure risks from deepfake pornography) or exacerbated (e.g., surveillance risks from collecting training data). One upshot of our work is that incorporating AI technologies into a product can alter the privacy risks it entails. Yet, current approaches to privacy-preserving AI/ML (e.g., federated learning, differential privacy, checklists) only address a subset of the privacy risks arising from the capabilities and data requirements of AI.

A Taxonomy of AI Privacy Risks: An Overview

In "Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks," Lee et al. propose a comprehensive taxonomy of how modern advances in AI and machine learning (ML) change the landscape of privacy risks. Their work is founded on an analysis of 321 documented AI privacy incidents sourced from the AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) repository. Using Solove's well-known 2006 privacy taxonomy as a backdrop, the paper articulates which privacy risks AI newly introduces and which existing threats it exacerbates.

The taxonomy defines twelve high-level privacy risks that AI technologies either newly create or intensify. These include identification from low-quality data and the resurgence of physiognomy, in which AI erroneously associates physical attributes with personal traits. The analysis shows that AI-specific capabilities and data requirements frequently alter privacy risks, and it provocatively argues that traditional privacy-preserving methods such as federated learning and differential privacy overlook several threats unique to AI systems.

Key Findings

  1. Data Collection and Processing Risk Dimension:
    • The research identifies Surveillance as a key risk exacerbated by AI's ability to facilitate large-scale data aggregation across diverse sources. AI systems collect vast amounts of personal data to improve model performance, intensifying the covert gathering and analysis of personal information.
    • New processing risks arise with AI's ability to robustly identify individuals and deduce future behaviors, often from low-quality or incomplete datasets. This ability poses significant risks in various sectors, including law enforcement and personalized marketing.
  2. Creation of Novel Privacy Risks:
    • AI creates an entirely new risk category, labeled Phrenology/Physiognomy, in which systems inadvertently revive debunked pseudosciences by attempting to infer traits such as criminality or sexual orientation from physical appearance alone.
    • The paper highlights exposure and distortion risks, where generative AI produces realistic but fake images or videos (e.g., deepfakes), threatening personal privacy through non-consensual content.
  3. Data Dissemination and Invasion:
    • AI exacerbates Disclosure risks through enhanced inferential capabilities that make it easier to deduce sensitive information about individuals, as seen in contexts like China's Safe City projects for public surveillance.
    • With Intrusion, AI extends the reach of invasive technologies, turning ubiquitous devices into constant surveillance tools and disrupting personal solitude beyond traditional means.
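
To make the structure of the taxonomy concrete, the coding scheme described above can be sketched as a small data structure. The Python below is illustrative only: the risk names and dimensions are the ones mentioned in this summary (a subset of the paper's twelve risks), grouped under the headings used above, and the change labels follow the paper's created/exacerbated/unchanged distinction.

```python
from enum import Enum

class RiskChange(Enum):
    """How AI altered a privacy risk, per the paper's coding scheme."""
    NEWLY_CREATED = "newly created"
    EXACERBATED = "exacerbated"
    UNCHANGED = "not meaningfully altered"

# Illustrative subset of the taxonomy: only the risks named in this
# summary, grouped by the Solove-style dimension they fall under.
TAXONOMY = {
    "data collection": {
        "surveillance": RiskChange.EXACERBATED,
    },
    "data processing": {
        "identification": RiskChange.EXACERBATED,
        "phrenology/physiognomy": RiskChange.NEWLY_CREATED,
    },
    "data dissemination": {
        "exposure": RiskChange.NEWLY_CREATED,    # e.g., deepfake pornography
        "distortion": RiskChange.NEWLY_CREATED,  # e.g., realistic fake media
        "disclosure": RiskChange.EXACERBATED,
    },
    "invasion": {
        "intrusion": RiskChange.EXACERBATED,
    },
}

def risks_with_change(change: RiskChange) -> list[str]:
    """List the risks coded with a given change type."""
    return [risk for dim in TAXONOMY.values()
            for risk, c in dim.items() if c is change]
```

Representing the codes this way makes the paper's central claim easy to query: each risk carries an explicit label for whether AI created it or merely amplified it.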

Implications and Future Directions

The implications of this work span theoretical and practical domains. Practically, the taxonomy offers actionable insights for the design of AI privacy-preserving systems by demonstrating that many current privacy-protective measures only address a subset of AI-induced risks. Future AI development must account for this broader set of risks, requiring novel methodological advancements tailored to the intricate dynamics of AI technologies.
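
The point that techniques like differential privacy cover only part of this risk space can be made concrete. Below is a minimal, illustrative sketch (not from the paper) of the standard Laplace mechanism for an ε-differentially-private count query: it bounds what a query result reveals about any single record in a dataset, but offers no protection against risks like physiognomic inference performed on a freshly captured face image.

```python
import math
import random

def dp_count(values: list[bool], epsilon: float) -> float:
    """Return an ε-differentially-private count via the Laplace mechanism.

    A counting query has L1 sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale 1/ε
    suffices for ε-DP.
    """
    true_count = sum(values)
    b = 1.0 / epsilon
    # Sample Laplace(0, b) noise by inverse transform sampling.
    u = random.random() - 0.5
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

The mechanism protects individuals already inside a training set or database; it says nothing about, for example, a deployed model inferring sensitive traits about people who never contributed data, which is precisely the kind of gap the taxonomy highlights.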

Theoretically, this paper invites further exploration of privacy risks in AI-driven systems that may not have been documented yet but could emerge as AI continues to infiltrate diverse sectors. Potential future risks include interrogation through AI-driven conversation tools, breaches of trust with AI mediating confidential interactions, and AI's role in enhancing or inducing new forms of decisional interference.

As the landscape of AI capabilities evolves, the taxonomy is seen as a living document that researchers and practitioners must iteratively refine in tandem with emerging AI incidents. The paper underscores the need for enhanced education and awareness among AI practitioners regarding the holistic perspective on privacy that considers AI-specific risks.

In conclusion, by articulating how AI changes the privacy risk paradigm, Lee et al. provide a critical foundation for AI researchers and practitioners to both anticipate and address the unique challenges introduced by integrating AI into everyday applications. This taxonomy serves as a pivotal resource in the ongoing endeavor to responsibly innovate in AI while safeguarding individual privacy.

References (152)
  1. 2012. Face to Face: Physiognomy & Phrenology THE SHELF. https://blogs.harvard.edu/preserving/2012/09/24/face-to-face-physiognomy-phrenology/
  2. 2018. Amazon patents ’voice-sniffing’ algorithms. BBC News (April 2018). https://www.bbc.com/news/technology-43725708
  3. 2019. England’s Keele University Neglects Patient Consent Regulations and Uses YouTube Videos to Study Autism in Children. https://www.trialsitenews.com/a/englands-keele-university-neglects-patient-consent-regulations-and-uses-youtube-videos-to-study-autism-in-children
  4. 2021a. Deepfake porn case suspect is released on bail - Taipei Times. https://www.taipeitimes.com/News/front/archives/2021/10/20/2003766430 Section: Front Page.
  5. 2021b. Myanmar: Facial Recognition System Threatens Rights. https://www.hrw.org/news/2021/03/12/myanmar-facial-recognition-system-threatens-rights
  6. 2021c. UW-Madison disables proctoring software amid complaints. https://apnews.com/article/technology-madison-wisconsin-education-software-90a41fa6fa5348d837efbbd3be3a88f3
  7. Lawrence Abrams. 2020. ProctorU confirms data breach after database leaked online. https://www.bleepingcomputer.com/news/security/proctoru-confirms-data-breach-after-database-leaked-online/
  8. A Review of Smart Homes—Past, Present, and Future. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 42, 6 (Nov. 2012), 1190–1203. https://doi.org/10.1109/TSMCC.2012.2189204 Conference Name: IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews).
  9. Irwin Altman. 1975. The environment and social behavior: Privacy, personal space, territory, crowding (first printing edition ed.). Brooks/Cole Pub. Co, Monterey, Calif.
  10. Katherine Anne Long. 2021. Amazon and Microsoft team up to defend against facial recognition lawsuits. https://www.seattletimes.com/business/technology/facial-recognition-lawsuits-against-amazon-and-microsoft-can-proceed-judge-rules/
  11. Blaise Aguera y Arcas. 2017. Physiognomy’s New Clothes. https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a
  12. Assuring the Machine Learning Lifecycle. ACM Computing Surveys (CSUR) 54 (5 2021). Issue 5. https://doi.org/10.1145/3453444
  13. Rana Ayyub. 2018. In India, journalists face slut-shaming and rape threats. New York Times 22 (2018).
  14. Chris Baraniuk. 2018. Exclusive: UK police wants AI to stop violent crime before it happens. https://www.newscientist.com/article/2186512-exclusive-uk-police-wants-ai-to-stop-violent-crime-before-it-happens/
  15. A Data Privacy Taxonomy. In Dataspace: The Final Frontier (Lecture Notes in Computer Science), Alan P. Sexton (Ed.). Springer, Berlin, Heidelberg, 42–54. https://doi.org/10.1007/978-3-642-02843-4_7
  16. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58 (June 2020), 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
  17. Lois Beckett. 2019. Under digital surveillance: how American schools spy on millions of kids. The Guardian (Oct. 2019). https://www.theguardian.com/world/2019/oct/22/school-student-surveillance-bark-gaggle
  18. AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias. https://doi.org/10.48550/arXiv.1810.01943 arXiv:1810.01943 [cs].
  19. Multimodal datasets: misogyny, pornography, and malignant stereotypes. http://arxiv.org/abs/2110.01963 arXiv:2110.01963 [cs].
  20. Brendan Bordelon. 2023. ’We better figure it out’: The politics trap that could slow a national AI law. https://www.politico.com/news/2023/05/19/ai-old-social-media-sam-altman-00097792
  21. A review of privacy-preserving techniques for deep learning. Neurocomputing 384 (4 2020), 21–45. https://doi.org/10.1016/J.NEUCOM.2019.11.041
  22. Louis Brandeis and Samuel Warren. 1890. The right to privacy. Harvard law review 4, 5 (1890), 193–220.
  23. Thomas Brewster. 2021. A $2 Billion Government Surveillance Lab Created Tech That Guesses Your Name By Simply Looking At Your Face. https://www.forbes.com/sites/thomasbrewster/2021/04/08/a-2-billion-government-surveillance-lab-created-tech-that-guesses-your-name-by-simply-looking-at-your-face/?sh=5842d9b76b1f
  24. M. Burgess. 2021. The Biggest Deepfake Abuse Site Is Growing in Disturbing Ways. WIRED.
  25. Balancing Utility and Fairness against Privacy in Medical Data. In 2020 IEEE Symposium Series on Computational Intelligence (SSCI). 1226–1233. https://doi.org/10.1109/SSCI47803.2020.9308226
  26. Cascade: crowdsourcing taxonomy creation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, Paris France, 1999–2008. https://doi.org/10.1145/2470654.2466265
  27. Embarrassing Exposures in Online Social Networks: An Integrated Perspective of Privacy Invasion and Relationship Bonding. Information Systems Research 26, 4 (2015), 675–694. https://www.jstor.org/stable/24700367 Publisher: INFORMS.
  28. Learning temporal coherence via self-supervision for GAN-based video generation. ACM Transactions on Graphics 39, 4 (Aug. 2020), 75:75:1–75:75:13. https://doi.org/10.1145/3386569.3392457
  29. Privacy Harms. Boston University Law Review (2022).
  30. Joseph Cox. 2023. AI-Generated Voice Firm Clamps Down After 4chan Makes Celebrity Voices for Abuse. https://www.vice.com/en/article/dy7mww/ai-voice-firm-4chan-celebrity-voices-emma-watson-joe-rogan-elevenlabs
  31. Joseph Cox and Jason Koebler. 2021. Hacked Surveillance Camera Firm Shows Staggering Scale of Facial Recognition. https://www.vice.com/en/article/wx83bz/verkada-hacked-facial-recognition-customers
  32. Designing, developing, and deploying artificial intelligence systems: Lessons from and for the public sector. Business Horizons 63, 2 (March 2020), 205–213. https://doi.org/10.1016/j.bushor.2019.11.004
  33. Benj Edwards. 2022. Artist finds private medical record photos in popular AI training data set. https://arstechnica.com/information-technology/2022/09/artist-finds-private-medical-record-photos-in-popular-ai-training-data-set/
  34. Stephen Eick and Annie I. Anton. 2020. Enhancing Privacy in Robotics via Judicious Sensor Selection. In 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, Paris, France, 7156–7165. https://doi.org/10.1109/ICRA40945.2020.9196983
  35. Exploring the Utility Versus Intrusiveness of Dynamic Audience Selection on Facebook. Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (Oct. 2021), 1–30. https://doi.org/10.1145/3476083
  36. Ella Fassler. 2021. South Korea Is Giving Millions of Photos to Facial Recognition Researchers. https://www.vice.com/en/article/xgdxqd/south-korea-is-selling-millions-of-photos-to-facial-recognition-researchers
  37. This App Claims It Can Detect ’Trustworthiness.’ It Can’t. https://www.vice.com/en/article/akd4bg/this-app-claims-it-can-detect-trustworthiness-it-cant
  38. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. https://doi.org/10.2139/ssrn.3518482
  39. Shaun Nichols in San Francisco. 2017. TV anchor says live on-air ’Alexa, order me a dollhouse’ – guess what happens next. https://www.theregister.com/2017/01/07/tv_anchor_says_alexa_buy_me_a_dollhouse_and_she_does/
  40. Batya Friedman and Helen Nissenbaum. 1997. Software agents and user autonomy. In Proceedings of the first international conference on Autonomous agents (AGENTS ’97). Association for Computing Machinery, New York, NY, USA, 466–469. https://doi.org/10.1145/267658.267772
  41. Gennie Gebhart. 2016. Google’s Allo Sends The Wrong Message About Encryption. https://www.eff.org/deeplinks/2016/09/googles-allo-sends-wrong-message-about-encryption
  42. Dave Gershgorn. 2021. GitHub and OpenAI launch a new AI tool that generates its own code. https://www.theverge.com/2021/6/29/22555777/github-openai-ai-tool-autocomplete-code
  43. Global Times. 2021. Xpeng apologizes for illegal collection of facial images after penalty - Global Times. https://www.globaltimes.cn/page/202112/1241489.shtml
  44. Eileen Guo. 2022. A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook? https://www.technologyreview.com/2022/12/19/1065306/roomba-irobot-robot-vacuums-artificial-intelligence-training-data-privacy/
  45. How Does Usable Security (Not) End Up in Software Products? Results From a Qualitative Interview Study. In 2022 IEEE Symposium on Security and Privacy (SP). IEEE, San Francisco, CA, USA, 893–910. https://doi.org/10.1109/SP46214.2022.9833756
  46. Thilo Hagendorff. 2020. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines 30, 1 (March 2020), 99–120. https://doi.org/10.1007/s11023-020-09517-8
  47. Maggie Harrison. [n. d.]. Startup Shocked When 4Chan Immediately Abuses Its Voice-Cloning AI. https://futurism.com/startup-4chan-voice-cloning-ai
  48. Adam Harvey and Jules. LaPlace. 2021. Exposing.ai. https://exposing.ai
  49. Adam Harvey and LaPlace, Jules. 2021. Exposing.ai: People in Photo Albums. https://exposing.ai/datasets/pipa/
  50. Drew Harwell. 2018. Wanted: The ‘perfect babysitter.’ Must pass AI scan for respect and attitude. Washington Post 23 (2018).
  51. Drew Harwell. 2019. A face-scanning algorithm increasingly decides whether you deserve the job. In Ethics of Data and Analytics. Auerbach Publications, 206–211.
  52. Kashmir Hill. 2020. The secretive company that might end privacy as we know it. In Ethics of Data and Analytics. Auerbach Publications, 170–177.
  53. Camilla Hodgson. 2019. Fast-food chains consider trying license plate recognition in drive-throughs. https://www.latimes.com/business/la-fi-license-plate-recognition-drive-through-restaurant-20190711-story.html Section: Business.
  54. Hal Hodson. 2016. Revealed: Google AI has access to huge haul of NHS patient data. https://www.newscientist.com/article/2086454-revealed-google-ai-has-access-to-huge-haul-of-nhs-patient-data/
  55. Tatum Hunter and Heather Kelly. 2022. With Roe overturned, period-tracking apps raise new worries. Washington Post (Aug. 2022). https://www.washingtonpost.com/technology/2022/05/07/period-tracking-privacy/
  56. A Taxonomy of user-perceived privacy risks to foster accountability of data-based services. Journal of Responsible Technology 10 (July 2022), 100029. https://doi.org/10.1016/j.jrt.2022.100029
  57. Heesoo Jang. 2021. A South Korean Chatbot Shows Just How Sloppy Tech Companies Can Be With User Data. Slate (April 2021). https://slate.com/technology/2021/04/scatterlab-lee-luda-chatbot-kakaotalk-ai-privacy.html
  58. Anniek Jansen and Sara Colombo. 2023. Mix & Match Machine Learning: An Ideation Toolkit to Design Machine Learning-Enabled Solutions. In Proceedings of the Seventeenth International Conference on Tangible, Embedded, and Embodied Interaction. ACM, Warsaw Poland, 1–18. https://doi.org/10.1145/3569009.3572739
  59. 2022. The Case of the Creepy Algorithm That ‘Predicted’ Teen Pregnancy. Wired (2022). https://www.wired.com/story/argentina-algorithms-pregnancy-prediction
  60. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1, 9 (Sept. 2019), 389–399. https://doi.org/10.1038/s42256-019-0088-2
  61. Poppy Johnston. 2022. Banned from Airbnb with no explanation? Here’s why. https://au.finance.yahoo.com/news/banned-from-airbnb-023208437.html
  62. Sayash Kapoor and Narayanan, Arvind. 2023. AI Snake Oil. https://www.aisnakeoil.com/
  63. “Why Do I Care What’s Similar?” Probing Challenges in AI-Assisted Child Welfare Decision-Making through Worker-AI Interface Design Concepts. In Designing Interactive Systems Conference. ACM, Virtual Event Australia, 454–470. https://doi.org/10.1145/3532106.3533556
  64. Kate Kaye. 2022. Class tests Intel AI to monitor student emotions on Zoom - Protocol. https://www.protocol.com/enterprise/emotion-ai-school-intel-edutech
  65. ”There will be less privacy, of course”: How and why people in 10 countries expect {AI} will affect privacy in the future. 579–603. https://www.usenix.org/conference/soups2023/presentation/kelley
  66. Understanding Frontline Workers’ and Unhoused Individuals’ Perspectives on AI Used in Homeless Services. https://doi.org/10.1145/3544548.3580882 arXiv:2303.09743 [cs].
  67. Marc Langheinrich. 2001. Privacy by Design — Principles of Privacy-Aware Ubiquitous Systems. In Ubicomp 2001: Ubiquitous Computing, Gerhard Goos, Juris Hartmanis, Jan van Leeuwen, Gregory D. Abowd, Barry Brumitt, and Steven Shafer (Eds.). Vol. 2201. Springer Berlin Heidelberg, Berlin, Heidelberg, 273–291. https://doi.org/10.1007/3-540-45427-6_23 Series Title: Lecture Notes in Computer Science.
  68. ”I Don’t Know If We’re Doing Good. I Don’t Know If We’re Doing Bad”: Investigating How Practitioners Scope, Motivate, and Conduct Privacy Work When Developing AI Products. In 33rd USENIX Security Symposium (USENIX Security 24). USENIX Association, Philadelphia, PA.
  69. Radhamely De Leon. 2021. ‘Roadrunner’ Director Deepfaked Anthony Bourdain’s Voice. https://www.vice.com/en/article/m7e54b/roadrunner-director-deepfaked-anthony-bourdains-voice
  70. Sam Levin. 2017a. LGBT groups denounce ’dangerous’ AI that uses your face to guess sexuality. The Guardian (Sept. 2017). https://www.theguardian.com/world/2017/sep/08/ai-gay-gaydar-algorithm-facial-recognition-criticism-stanford
  71. Sam Levin. 2017b. New AI can guess whether you’re gay or straight from a photograph. The Guardian (Sept. 2017). https://www.theguardian.com/technology/2017/sep/07/new-artificial-intelligence-can-tell-whether-youre-gay-or-straight-from-a-photograph
  72. Federated learning: Challenges, methods, and future directions. IEEE signal processing magazine 37, 3 (2020), 50–60. ISBN: 1053-5888 Publisher: IEEE.
  73. When Machine Learning Meets Privacy: A Survey and Outlook. Comput. Surveys 54 (11 2020). Issue 2. https://doi.org/10.1145/3436755
  74. Privacy and Security Issues in Deep Learning: A Survey. IEEE Access 9 (2021), 4566–4593. https://doi.org/10.1109/ACCESS.2020.3045078
  75. ”I don’t know how to protect myself”: Understanding Privacy Perceptions Resulting from the Presence of Bystanders in Smart Environments. In Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society. ACM, Tallinn Estonia, 1–11. https://doi.org/10.1145/3419249.3420164
  76. Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (Nov. 2019), 81:1–81:32. https://doi.org/10.1145/3359183
  77. Ryan Mac, Caroline Haskins, and Logan McDonald. 2020. Clearview’s Facial Recognition App Has Been Used By The Justice Department, ICE, Macy’s, Walmart, And The NBA. https://www.buzzfeednews.com/article/ryanmac/clearview-ai-fbi-ice-global-law-enforcement Section: Tech.
  78. Cade Metz. 2019. Facial Recognition Tech Is Growing Stronger, Thanks to Your Face. The New York Times (July 2019). https://www.nytimes.com/2019/07/13/technology/databases-faces-facial-recognition-technology.html
  79. Nathaniel Meyersohn. 2022. Walgreens replaced some fridge doors with screens. And some shoppers absolutely hate it | CNN Business. https://www.cnn.com/2022/03/12/business/walgreens-freezer-screens/index.html
  80. Thinking responsibly about responsible AI and ‘the dark side’ of AI. European Journal of Information Systems 31, 3 (May 2022), 257–268. https://doi.org/10.1080/0960085X.2022.2026621 Publisher: Taylor & Francis _eprint: https://doi.org/10.1080/0960085X.2022.2026621.
  81. Dan Milmo. 2021. Amazon asks Ring owners to respect privacy after court rules usage broke law. The Guardian (Oct. 2021). https://www.theguardian.com/uk-news/2021/oct/14/amazon-asks-ring-owners-to-respect-privacy-after-court-rules-usage-broke-law
  82. Hanako Montgomery. 2021. Man Arrested for Uncensoring Japanese Porn With AI in First Deepfake Case. https://www.vice.com/en/article/xgdq87/deepfakes-japan-arrest-japanese-porn
  83. Paul Mozur. 2018. Inside China’s Dystopian Dreams: A.I., Shame and Lots of Cameras. The New York Times (July 2018). https://www.nytimes.com/2018/07/08/business/china-surveillance-technology.html
  84. Privacy is an essentially contested concept: a multi-dimensional analytic for mapping privacy. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences (Dec. 2016). https://doi.org/10.1098/rsta.2016.0118 Publisher: The Royal Society.
  85. Machine learning with membership privacy using adversarial regularization. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security. 634–646.
  86. Sophie J. Nightingale and Hany Farid. 2022. AI-synthesized faces are indistinguishable from real faces and more trustworthy. Proceedings of the National Academy of Sciences 119, 8 (Feb. 2022), e2120481119. https://doi.org/10.1073/pnas.2120481119 Publisher: Proceedings of the National Academy of Sciences.
  87. Helen Nissenbaum. 2004. Privacy as Contextual Integrity. Washington Law Review 79, 1 (Feb. 2004), 119. https://digitalcommons.law.uw.edu/wlr/vol79/iss1/10
  88. Maria Noriega. 2020. The application of artificial intelligence in police interrogations: An analysis addressing the proposed effect AI has on racial and gender bias, cooperation, and false confessions. Futures 117 (March 2020), 102510. https://doi.org/10.1016/j.futures.2019.102510
  89. José Bernardi S. Nunes and Andrey Brito. 2023. A taxonomy on privacy and confidentiality. In Proceedings of the 11th Latin-American Symposium on Dependable Computing (LADC ’22). Association for Computing Machinery, New York, NY, USA, 1–10. https://doi.org/10.1145/3569902.3569903
  90. Jonathan A Obar. 2020. Sunlight alone is not a disinfectant: Consent and the futility of opening Big Data black boxes (without assistance). Big Data & Society 7, 1 (2020), 2053951720935615.
  91. Andrew O’Hara. 2021. Amazon Halo review: incredibly invasive, but helps you learn about yourself. https://appleinsider.com/articles/20/12/02/review-amazon-halo-is-incredibly-invasive-but-helps-you-learn-about-yourself
  92. Security and Privacy for Artificial Intelligence: Opportunities and Challenges; Security and Privacy for Artificial Intelligence: Opportunities and Challenges. J. ACM 37 (2020). Issue 4. https://doi.org/10.1145/1122445.1122456
  93. Google PAIR. 2019. People + AI Guidebook. (2019).
  94. Leysia Palen and Paul Dourish. 2003. Unpacking ”privacy” for a networked world. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’03). Association for Computing Machinery, New York, NY, USA, 129–136. https://doi.org/10.1145/642611.642635
  95. Rachel Pannett. 2022. German police used a tracing app to scout crime witnesses. Some fear that’s fuel for covid conspiracists. Washington Post (Jan. 2022). https://www.washingtonpost.com/world/2022/01/13/german-covid-contact-tracing-app-luca/
  96. Minwoo Park. 2021. Seoul using AI to detect and prevent suicide attempts on bridges. Reuters (June 2021). https://www.reuters.com/world/asia-pacific/seoul-using-ai-detect-prevent-suicide-attempts-bridges-2021-06-30/
  97. Kari Paul. 2019. Google workers can listen to what people say to its AI home devices. The Guardian (July 2019). https://www.theguardian.com/technology/2019/jul/11/google-home-assistant-listen-recordings-users-privacy
  98. Charlie Pownall. 2023. AI, Algorithmic and Automation Incident and Controversy Repository (AIAAIC). https://www.aiaaic.org/
  99. Peter N. Henderson (The Canadian Press). 2015. Checked ’How Old Do I Look?’ You gave Microsoft rights to your photo | CBC News. https://www.cbc.ca/news/science/how-old-do-i-look-microsoft-website-raises-privacy-concerns-1.3062176
  100. QTCinderella [@qtcinderella]. 2023. I want to scream. Stop. Everybody fucking stop. Stop spreading it. Stop advertising it. Stop. Being seen “naked” against your will should NOT BE A PART OF THIS JOB. Thank you to all the male internet “journalists” reporting on this issue. Fucking losers @HUN2R. https://twitter.com/qtcinderella/status/1620142227250094080
  101. Iyad Rahwan. 2018. Society-in-the-loop: programming the algorithmic social contract. Ethics and Information Technology 20, 1 (March 2018), 5–14. https://doi.org/10.1007/s10676-017-9430-8
  102. Megha Rajagopalan. 2018. China Is Said To Be Using A Database To Identify And Detain People As Potential Threats. https://www.buzzfeednews.com/article/meghara/human-rights-watch-china-using-big-data-to-detain-people Section: World.
  103. The Fallacy of AI Functionality. In 2022 ACM Conference on Fairness, Accountability, and Transparency. ACM, Seoul Republic of Korea, 959–972. https://doi.org/10.1145/3531146.3533158
  104. Reuters. 2018. China school using artificial intelligence to detect unfocused students. https://nypost.com/2018/05/17/china-is-using-ai-to-keep-high-school-students-in-line/
  105. Mark O Riedl. 2019. Human-centered artificial intelligence and machine learning. Human Behavior and Emerging Technologies 1, 1 (2019), 33–36.
  106. Juliette Rihl. 2021. Emails show Pittsburgh police officers accessed Clearview facial recognition after BLM protests. http://www.publicsource.org/pittsburgh-police-facial-recognition-blm-protests-clearview/
  107. Charles Rollet. 2021. PRC Minority Ethnicity Recognition Research Targeted Uyghurs, Breached Ethical Standards. https://ipvm.com/reports/eth-rec-ethics
  108. Nithya Sambasivan and Jess Holbrook. 2018. Toward responsible AI for the next billion users. Interactions 26 (Dec. 2018), 68–71. https://doi.org/10.1145/3298735
  109. Sam Schechner and Mark Secada. 2019. You Give Apps Sensitive Personal Information. Then They Tell Facebook. Wall Street Journal (Feb. 2019). https://www.wsj.com/articles/you-give-apps-sensitive-personal-information-then-they-tell-facebook-11550851636
  110. LAION-5B: An open large-scale dataset for training next generation image-text models. https://doi.org/10.48550/arXiv.2210.08402 arXiv:2210.08402 [cs].
  111. A Survey of Privacy Risks and Mitigation Strategies in the Artificial Intelligence Life Cycle. IEEE Access (2023). https://doi.org/10.1109/ACCESS.2023.3287195
  112. Jon Bateman, Elonnai Hickok, Laura Courchesne, Isra Thange, and Jacob N. Shapiro. 2021. Measuring the Effects of Influence Operations: Key Findings and Gaps From Empirical Research. https://carnegieendowment.org/2021/06/28/measuring-effects-of-influence-operations-key-findings-and-gaps-from-empirical-research-pub-84824
  113. Reza Shokri and Vitaly Shmatikov. 2015. Privacy-preserving deep learning. Proceedings of the ACM Conference on Computer and Communications Security 2015-October (10 2015), 1310–1321. https://doi.org/10.1145/2810103.2813687
  114. Tom Simonite. 2017. Facebook Can Now Find Your Face, Even When It’s Not Tagged. Wired (Dec. 2017). https://www.wired.com/story/facebook-will-find-your-face-even-when-its-not-tagged/ Section: tags.
  115. John Smith. 2019. IBM Research Releases ’Diversity in Faces’ Dataset to Advance Study of Fairness in Facial Recognition Systems. https://www.ibm.com/blogs/research/2019/01/diversity-in-faces/
  116. Daniel J. Solove. 2006. A Taxonomy of Privacy. University of Pennsylvania Law Review 154, 3 (Jan. 2006), 477. https://doi.org/10.2307/40041279
  117. Matthias Spielkamp. 2019. Automating Society: Taking Stock of Automated Decision-Making in the EU. BertelsmannStiftung Studies 2019. (2019).
  118. Intriguing properties of neural networks. 2nd International Conference on Learning Representations, ICLR 2014 - Conference Track Proceedings (12 2013). https://arxiv.org/abs/1312.6199v4
  119. Privacy Champions in Software Teams: Understanding Their Motivations, Strategies, and Challenges. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama Japan, 1–15. https://doi.org/10.1145/3411764.3445768
  120. The Local. 2020. Spain’s Mercadona supermarkets install facial recognition systems to keep thieves at bay. https://www.thelocal.es/20200702/spains-mercadona-supermarkets-install-facial-recognition-systems-to-keep-thieves-at-bay
  121. Daniel W. Tigard. 2021. Responsible AI and moral responsibility: a common appreciation. AI and Ethics 1, 2 (May 2021), 113–117. https://doi.org/10.1007/s43681-020-00009-0
  122. Sixth Tone. 2019. Camera Above the Classroom. https://www.sixthtone.com/news/1003759
  123. Hayley Tsukayama. 2021. Gmail’s Inbox app will now write (some of) your e-mails for you. Washington Post (Dec. 2021). https://www.washingtonpost.com/news/the-switch/wp/2015/11/03/gmails-inbox-app-will-now-write-some-of-your-e-mails-for-you/
  124. William Turton. 2021. Hackers breach thousands of security cameras, exposing Tesla, jails, hospitals. Bloomberg (2021). https://www.bloomberg.com/news/articles/2021-03-09/hackers-expose-tesla-jails-in-breach-of-150-000-security-cams
  125. Smart, useful, scary, creepy: perceptions of online behavioral advertising. In Proceedings of the Eighth Symposium on Usable Privacy and Security (SOUPS ’12). Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/2335356.2335362
  126. Taxonomies in software engineering: A Systematic mapping study and a revised taxonomy development method. Information and Software Technology 85 (May 2017), 43–59. https://doi.org/10.1016/j.infsof.2017.01.006
  127. Chris Velazco. 2021. Amazon’s ’Mentor’ tracking software has been screwing drivers for years. https://www.engadget.com/amazon-mentor-app-edriving-delivery-driver-tracking-surveillance-gps-194346304.html
  128. James Vincent. 2022. Binance executive claims scammers made a deepfake of him-the verge.
  129. Lessons learn on responsible AI implementation: the ASSISTANT use case. IFAC-PapersOnLine 55, 10 (Jan. 2022), 377–382. https://doi.org/10.1016/j.ifacol.2022.09.422
  130. Jane Wakefield. 2021a. Amazon faces spying claims over AI cameras in vans. BBC News (Feb. 2021). https://www.bbc.com/news/technology-55938494
  131. Jane Wakefield. 2021b. Neighbour wins privacy row over smart doorbell and cameras. BBC News (Oct. 2021). https://www.bbc.com/news/technology-58911296
  132. Ari Ezra Waldman. [n. d.]. Designing Without Privacy. Houston Law Review ([n. d.]).
  133. Peter Walker. 2021. Call centre staff to be monitored via webcam for home-working ‘infractions’. The Guardian (March 2021). https://www.theguardian.com/business/2021/mar/26/teleperformance-call-centre-staff-monitored-via-webcam-home-working-infractions
  134. Yilun Wang and Michal Kosinski. 2018. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology 114, 2 (2018), 246.
  135. SoK: A Framework for Unifying At-Risk User Research. In 2022 IEEE Symposium on Security and Privacy (SP). 2344–2360. https://doi.org/10.1109/SP46214.2022.9833643
  136. This Person (Probably) Exists: Identity Membership Attacks Against GAN Generated Faces. arXiv preprint arXiv:2107.06018 (2021).
  137. Alan F. Westin. 1967. Privacy and Freedom.
  138. Trustworthy AI Development Guidelines for Human System Interaction. In Proceedings of the 13th International Conference on Human System Interaction (HSI). 130–136.
  139. Kyle Wiggers. 2021. AI datasets are prone to mismanagement, study finds. https://venturebeat.com/ai/ai-datasets-are-prone-to-mismanagement-study-finds/
  140. Emma Woollacott. 2016. 70,000 OkCupid Profiles Leaked, Intimate Details And All. Forbes (May 2016). https://www.forbes.com/sites/emmawoollacott/2016/05/13/intimate-data-of-70000-okcupid-users-released/
  141. Xiaolin Wu and Xi Zhang. 2016. Automated inference on criminality using face images. arXiv preprint arXiv:1611.04135 (2016), 4038–4052.
  142. The Slow Violence of Surveillance Capitalism. In FAccT ’23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (June 2023), 1826–1837. https://doi.org/10.1145/3593013.3594119
  143. Vicky Xiuzhong Xu and Bang Xiao. 2018. Chinese authorities use facial recognition, public shaming to crack down on jaywalking, criminals. ABC News 20 (2018).
  144. Artificial intelligence: A powerful paradigm for scientific research. The Innovation 2, 4 (2021), 100179. https://doi.org/10.1016/j.xinn.2021.100179
  145. Investigating How Experienced UX Designers Effectively Work with Machine Learning. In Proceedings of the 2018 Designing Interactive Systems Conference. ACM, Hong Kong China, 585–596. https://doi.org/10.1145/3196709.3196730
  146. How Experienced Designers of Enterprise Applications Engage AI as a Design Material. In CHI Conference on Human Factors in Computing Systems. ACM, New Orleans LA USA, 1–13. https://doi.org/10.1145/3491102.3517491
  147. Creating Design Resources to Scaffold the Ideation of AI Concepts. In Proceedings of the 2023 ACM Designing Interactive Systems Conference. ACM, Pittsburgh PA USA, 2326–2346. https://doi.org/10.1145/3563657.3596058
  148. Towards a multi-stakeholder value-based assessment framework for algorithmic systems. In 2022 ACM Conference on Fairness, Accountability, and Transparency. ACM, Seoul Republic of Korea, 535–563. https://doi.org/10.1145/3531146.3533118
  149. Beyond frontal faces: Improving Person Recognition using multiple cues. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), 4804–4813.
  150. Phoebe Zhang. 2019. Chinese university says new classroom facial recognition system will improve attendance. South China Morning Post (2019). https://www.scmp.com/news/china/science/article/3025329/watch-and-learn-chinese-university-says-new-classroom-facial
  151. Michael Zimmer. 2016. OkCupid Study Reveals the Perils of Big-Data Science. Wired (May 2016). https://www.wired.com/2016/05/okcupid-study-reveals-perils-big-data-science/
  152. Shoshana Zuboff. 2019. The Age of Surveillance Capitalism : The Fight for a Human Future at the New Frontier of Power. 691 pages.
Authors (5)
  1. Hao-Ping Lee
  2. Yu-Ju Yang
  3. Thomas Serban von Davier
  4. Jodi Forlizzi
  5. Sauvik Das
Citations (32)