The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment (2404.13802v2)
Abstract: As more algorithmic systems have come under scrutiny for their potential to inflict societal harms, an increasing number of organizations that hold power over harmful algorithms have chosen (or been legally required) to abandon them. While social movements and calls to abandon harmful algorithms have emerged across application domains, little academic attention has been paid to studying abandonment as a means to mitigate algorithmic harms. In this paper, we take a first step towards conceptualizing "algorithm abandonment": an organization's decision to stop designing, developing, or using an algorithmic system due to its (potential) harms. We conduct a thematic analysis of real-world cases of algorithm abandonment to characterize the dynamics leading to this outcome. Our analysis of 40 cases reveals that campaigns to abandon an algorithm follow a common process of six iterative phases, which we term the "6 D's of abandonment": discovery, diagnosis, dissemination, dialogue, decision, and death. In addition, we highlight key factors that facilitate (or hinder) abandonment, including characteristics of both the technical and social systems in which the algorithm is embedded. We discuss implications for several stakeholders, including proprietors and technologists who have the power to influence an algorithm's (dis)continued use, FAccT researchers, and policymakers.
- [n. d.]. AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) Repository. Retrieved January 18, 2024 from https://www.aiaaic.org/aiaaic-repository
- [n. d.]. Galactica Demo — galactica.org. https://galactica.org/. [Accessed 22-01-2024].
- [n. d.]. New liver transplant rules yield winners, losers as wasted organs reach record high — washingtonpost.com. https://www.washingtonpost.com/business/2023/03/21/liver-transplants-acuity-circle-policy/. [Accessed 22-01-2024].
- AlgorithmWatch 2020. How Dutch activists got an invasive fraud detection algorithm banned. AlgorithmWatch. Retrieved January 16, 2024 from https://algorithmwatch.org/en/syri-netherlands-algorithm/ Email newsletter.
- Logic(s) Magazine 2020. Safe or Just Surveilled?: Tawana Petty on the Fight Against Facial Recognition Surveillance. Logic(s) Magazine. Retrieved January 16, 2024 from https://logicmag.io/security/safe-or-just-surveilled-tawana-petty-on-facial-recognition/ Interview.
- Stop LAPD Spying Coalition (Ed.). 2021. The Ghosts of White Supremacy in AI Reform. https://ainowinstitute.org/publication/a-new-ai-lexicon-surveillance
- Amnesty International (Ed.). 2022. Ban the Scan. Retrieved January 16, 2024 from https://banthescan.amnesty.org/
- Eticas 2022. The External Audit of the VioGen System. Eticas. https://eticasfoundation.org/wp-content/uploads/2022/03/ETICAS-FND-The-External-Audit-of-the-VioGen-System.pdf
- The Associated Press (Ed.). 2022. Oregon is dropping an artificial intelligence tool used in child welfare system. Retrieved January 16, 2024 from https://www.npr.org/2022/06/02/1102661376/oregon-drops-artificial-intelligence-child-abuse-cases
- AI Now Institute (Ed.). 2023. Algorithmic Accountability: Moving Beyond Audits. Retrieved January 16, 2024 from https://ainowinstitute.org/publication/algorithmic-accountability
- Electronic Frontier Foundation 2023. FOIA How To. Electronic Frontier Foundation. Retrieved January 16, 2024 from https://www.eff.org/issues/transparency/foia-how-to
- International Association of Privacy Associates 2023. Global AI Legislation Tracker. International Association of Privacy Associates. Retrieved January 16, 2024 from https://iapp.org/resources/article/global-ai-legislation-tracker/
- Duane Morris Government Strategies (Ed.). 2023. Regulating Artificial Intelligence In Mental Health: States’ Attempts To Revolutionize Mental Health Services. Retrieved January 16, 2024 from https://statecapitallobbyist.com/artificial-intelligence-ai/regulating-artificial-intelligence-in-mental-health-services-states-attempts-to-revolutionize-mental-health-services/
- The U.S. Federal Trade Commission 2023. Rite Aid Banned from Using AI Facial Recognition After FTC Says Retailer Deployed Technology without Reasonable Safeguards. The U.S. Federal Trade Commission. Retrieved January 16, 2024 from https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without Press Release.
- Common Cause & Lokniti – Centre for the Study Developing Societies (CSDS) 2023. Status of Policing in India Report 2023: Surveillance and the Question of Privacy. Common Cause & Lokniti – Centre for the Study Developing Societies (CSDS). Retrieved January 16, 2024 from https://www.commoncause.in/wotadmin/upload/REPORT_2023.pdf
- Lucy Parsons Labs 2023. Who We Are. Lucy Parsons Labs. Retrieved January 16, 2024 from https://lucyparsonslabs.com/about/
- Parag Agrawal and Dantley Davis. 2020. Transparency around image cropping and changes to come. https://blog.twitter.com/en_us/topics/product/2020/transparency-image-cropping
- Sarah Ahmed. 2017. No. Retrieved January 16, 2024 from https://feministkilljoys.com/2017/06/30/no/
- Ali Alkhatib and Michael Bernstein. 2019. Street-Level Algorithms: A Theory at the Gaps Between Policy and Decisions. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland Uk) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3290605.3300760
- Conceptualizing Algorithmic Stigmatization. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (, Hamburg, Germany,) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 373, 18 pages. https://doi.org/10.1145/3544548.3580970
- Laboratorio de Inteligencia Artificial Aplicada. 2018. Sobre la predicción automática de embarazos adolescentes.
- One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques. arXiv:1909.03012 [cs.AI]
- Fairness and Machine Learning: Limitations and Opportunities. MIT Press.
- Eric P.S. Baumer and M. Six Silberman. 2011. When the Implication is Not to Design (Technology). In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (, Vancouver, BC, Canada,) (CHI ’11). Association for Computing Machinery, New York, NY, USA, 2271–2274. https://doi.org/10.1145/1978942.1979275
- AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias. arXiv:1810.01943 [cs.AI]
- Algorithmic Bias Playbook. https://www.ftc.gov/system/files/documents/public_events/1582978/algorithmic-bias-playbook.pdf
- Ruha Benjamin. 2016. Informed Refusal: Toward a Justice-based Bioethics. Science, Technology, & Human Values 41, 6 (2016), 967–990. https://doi.org/10.1177/0162243916656059 arXiv:https://doi.org/10.1177/0162243916656059
- Ruha Benjamin. 2019. Race after technology: Abolitionist tools for the new Jim code. Polity.
- Johana Bhuiyan. 2021. LAPD ended predictive policing programs amid public outcry. A new effort shares many of their flaws. Retrieved January 16, 2024 from https://www.theguardian.com/us-news/2021/nov/07/lapd-predictive-policing-surveillance-reform
- Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2 (2006), 77–101. https://doi.org/10.1191/1478088706qp063oa arXiv:https://www.tandfonline.com/doi/pdf/10.1191/1478088706qp063oa
- Rachel Bukowitz and Tim O’Loughlin. 2022. Governing by Algorithm? Child Protection in Aotearoa New Zealand. https://anzsog.edu.au/research-insights-and-resources/research/governing-by-algorithm-child-protection-in-aotearoa-new-zealand/
- bundesverfassungsgericht. 2023. Bundesverfassungsgericht - Press - Legislation in Hesse and Hamburg regarding automated data analysis for the prevention of criminal acts is unconstitutional. https://www.bundesverfassungsgericht.de/SharedDocs/Pressemitteilungen/EN/2023/bvg23-018.html
- Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency. PMLR, 77–91.
- Joy Buolamwini and Timnit Gebru. 2023. Algorithmic Justice League Gender Shades 5th Anniversary Celebration. (2023). https://www.youtube.com/watch?v=8JSxbZyivuE Virtual discussion (recorded).
- Jenna Burrell and Deirdre Mulligan. 2020. The Berkeley Algorithmic Fairness and Opacity Group Refusal Conference. Retrieved January 16, 2024 from https://afog.berkeley.edu/programs/the-refusal-conference
- Ryan Calo and Danielle Keats Citron. 2021. The Automated Administrative State: A Crisis of Legitimacy. Emory Law Journal (2021). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3553590
- Zeph Capo and Janet Bass. 2017. Federal Suit Settlement: End of Value-Added Measures for Teacher Termination in Houston — American Federation of Teachers. https://www.aft.org/press-release/federal-suit-settlement-end-value-added-measures-teacher-termination-houston
- Non-participation in digital media: toward a framework of mediated political action. Media, Culture & Society 37, 6 (2015), 850–866. https://doi.org/10.1177/0163443715584098 arXiv:https://doi.org/10.1177/0163443715584098
- Cindy Chang. 2018. LAPD officials defend predictive policing as activists call for its end. Retrieved January 16, 2024 from https://www.latimes.com/local/lanow/la-me-lapd-data-policing-20180724-story.html
- Kyle Chayka. 2023. Rethinking the Luddites in the Age of AI. Retrieved January 16, 2024 from https://www.newyorker.com/books/page-turner/rethinking-the-luddites-in-the-age-of-ai
- How Child Welfare Workers Reduce Racial Disparities in Algorithmic Decisions. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (, New Orleans, LA, USA,) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 162, 22 pages. https://doi.org/10.1145/3491102.3501831
- Lingwei Cheng and Alexandra Chouldechova. 2022. Heterogeneity in Algorithm-Assisted Decision-Making: A Case Study in Child Abuse Hotline Screening. 6, CSCW2, Article 376 (nov 2022), 33 pages. https://doi.org/10.1145/3555101
- Rumman Chowdhury. 2021. Sharing learnings about our image cropping algorithm. https://blog.twitter.com/engineering/en_us/topics/insights/2021/sharing-learnings-about-our-image-cropping-algorithm
- Feminist Data Manifest-No. Retrieved January 16, 2024 from https://www.manifestno.com/
- Richard Conniff. 2011. What the Luddites Really Fought Against. Retrieved January 16, 2024 from https://www.smithsonianmag.com/history/what-the-luddites-really-fought-against-264412/
- Jeffrey Dastin. [n. d.]. Insight - Amazon scraps secret AI recruiting tool that showed bias against women. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G/. [Accessed 22-01-2024].
- Jeffrey Dastin. 2018. Insight - Amazon scraps secret AI recruiting tool that showed bias against women. Retrieved January 16, 2024 from https://www.reuters.com/article/idUSKCN1MK0AG/
- Jeffrey Dastin. 2020. Rite Aid deployed facial recognition systems in hundreds of U.S. stores. Retrieved January 16, 2024 from https://www.reuters.com/investigates/special-report/usa-riteaid-software/
- Algorithmic reparation. Big Data & Society 8, 2 (2021), 20539517211044808. https://doi.org/10.1177/20539517211044808 arXiv:https://doi.org/10.1177/20539517211044808
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (2020). https://api.semanticscholar.org/CorpusID:211171909
- Exploring How Machine Learning Practitioners (Try To) Use Fairness Toolkits. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (Seoul, Republic of Korea) (FAccT ’22). Association for Computing Machinery, New York, NY, USA, 473–484. https://doi.org/10.1145/3531146.3533113
- Building, Shifting, & Employing Power: A Framework for Understanding & Empowering Action From Below in Response to Algorithmic Harm. In submission (2023).
- Megan Rose Dickey. 2020. Twitter and Zoom’s algorithmic bias issues. https://techcrunch.com/2020/09/21/twitter-and-zoom-algorithmic-bias-issues/
- Social Justice-Oriented Interaction Design: Outlining Key Design Strategies and Commitments. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems (Brisbane, QLD, Australia) (DIS ’16). Association for Computing Machinery, New York, NY, USA, 656–671. https://doi.org/10.1145/2901790.2901861
- Finale Doshi-Velez and Been Kim. 2017. Towards A Rigorous Science of Interpretable Machine Learning. arXiv:1702.08608 [stat.ML]
- Fairness through Awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (Cambridge, Massachusetts) (ITCS ’12). Association for Computing Machinery, New York, NY, USA, 214–226. https://doi.org/10.1145/2090236.2090255
- The Algorithmic Imprint. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (Seoul, Republic of Korea) (FAccT ’22). Association for Computing Machinery, New York, NY, USA, 1305–1317. https://doi.org/10.1145/3531146.3533186
- Elizabeth Elizalde and Michael Gartland. 2019. Brooklyn tenants in rent-regulated apartments push state to nix landlord’s facial recognition software. Retrieved January 16, 2024 from https://www.nydailynews.com/2019/05/01/brooklyn-tenants-in-rent-regulated-apartments-push-state-to-nix-landlords-facial-recognition-software/
- Ritchie Eppink. 2023. Testimony of Ritchie Eppink: AI in Government United States Senate Committee on Homeland Security & Government Affairs. Retrieved January 16, 2024 from https://www.hsgac.senate.gov/wp-content/uploads/Testimony-Eppink-2023-05-16-1.pdf
- Thomas Erdbrink. 2021. Government in Netherlands Resigns After Benefit Scandal. Retrieved January 16, 2024 from https://www.nytimes.com/2021/01/15/world/europe/dutch-government-resignation-rutte-netherlands.html
- “Be Careful; Things Can Be Worse than They Appear”: Understanding Biased Algorithms and Users’ Behavior Around Them in Rating Platforms. Proceedings of the International AAAI Conference on Web and Social Media 11, 1 (May 2017), 62–71. https://doi.org/10.1609/icwsm.v11i1.14898
- Virginia Eubanks. 2018. A response to Allegheny County DHS. https://virginia-eubanks.com/2018/02/16/a-response-to-allegheny-county-dhs/
- FairFare. 2023. FairFare: Unveiling Ridehail Fairness. Retrieved January 16, 2024 from https://getfairfare.org/
- Katherine B. Forrest. 2021. When Machines Can Be Judge, Jury, and Executioner: Justice in the Age of Artificial Intelligence. World Scientific.
- Value Sensitive Design and Information Systems. https://doi.org/10.1007/978-94-007-7844-3_4
- Yasmin Gagne. 2019. How we fought our landlord’s secretive plan for facial recognition—and won. Retrieved January 16, 2024 from https://www.fastcompany.com/90431686/our-landlord-wants-to-install-facial-recognition-in-our-homes-but-were-fighting-back
- William Gavin. 2022. As privacy concerns arise, organizations using facial recognition technology spend on lobbying. https://www.opensecrets.org/news/2022/03/as-privacy-concerns-arise-organizations-using-facial-recognition-technology-continue-to-employ-lobbyists/
- Albert Gehami and Leila Doty. 2023. When the Rubber Meets the Road: Experience Implementing AI Governance in a Public Agency with the City of San José. (2023). https://www.youtube.com/watch?v=Bif3fwI_d20 ACM FAccT Tutorial.
- The Devil is in the Details: Interrogating Values Embedded in the Allegheny Family Screening Tool. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (Chicago, IL, USA) (FAccT ’23). Association for Computing Machinery, New York, NY, USA, 1292–1310. https://doi.org/10.1145/3593013.3594081
- Lauren Kaori Gurley. 2021. Amazon’s AI Cameras Are Punishing Drivers for Mistakes They Didn’t Make. Retrieved January 16, 2024 from https://www.vice.com/en/article/88npjv/amazons-ai-cameras-are-punishing-drivers-for-mistakes-they-didnt-make
- Rebecca Hanchett. 2022. Rhode Island Looks At Banning Facial Recognition By Sports Betting Apps. Retrieved January 16, 2024 from https://www.gamingtoday.com/news/rhode-island-banning-facial-recognition-sports-betting-apps/
- Karen Hao. 2020. The two-year fight to stop Amazon from selling face recognition to the police. https://www.technologyreview.com/2020/06/12/1003482/amazon-stopped-selling-police-face-recognition-fight/
- Bernard E. Harcourt. 2006. Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age. The Chicago University Press.
- Equality of Opportunity in Supervised Learning. In Advances in Neural Information Processing Systems, D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (Eds.), Vol. 29. Curran Associates, Inc. https://proceedings.neurips.cc/paper_files/paper/2016/file/9d2682367c3935defcb1f9e247a97c0d-Paper.pdf
- Caroline Haskins. 2019. Dozens of Cities Have Secretly Experimented With Predictive Policing Software. https://www.vice.com/en/article/d3m7jq/dozens-of-cities-have-secretly-experimented-with-predictive-policing-software
- Caroline Haskins. 2021. The NYPD Has Misled The Public About Its Use Of Facial Recognition Tool Clearview AI. https://www.buzzfeednews.com/article/carolinehaskins1/nypd-has-misled-public-about-clearview-ai-use
- Kashmir Hill. 2020a. The Secretive Company That Might End Privacy as We Know It. Retrieved January 16, 2024 from https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
- Kashmir Hill. 2020b. Wrongfully Accused by an Algorithm. The New York Times (Jun 2020). https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html
- Sally Ho and Garance Burke. 2022. An algorithm that screens for child neglect raises concerns. Retrieved January 16, 2024 from https://apnews.com/article/child-welfare-algorithm-investigation-9497ee937e0053ad4144a86c68241ef1
- Sally Ho and Garnace Burke. 2023. Child welfare algorithm faces Justice Department scrutiny. https://apnews.com/article/justice-scrutinizes-pittsburgh-child-welfare-ai-tool-4f61f45bfc3245fd2556e886c2da988b,lastaccessed={January16,2024},
- Sarah Homewood. 2019. Inaction as a Design Decision: Reflections on Not Designing Self-Tracking Tools for Menopause. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland Uk) (CHI EA ’19). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3290607.3310430
- Bonnie Honig. 2021. A Feminist Theory of Refusal. Harvard University Press.
- Amanda Hoover. 2023. An Eating Disorder Chatbot Is Suspended for Giving Harmful Advice. Retrieved January 16, 2024 from https://www.wired.com/story/tessa-chatbot-suspended/
- Values in Repair. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (San Jose, California, USA) (CHI ’16). Association for Computing Machinery, New York, NY, USA, 1403–1414. https://doi.org/10.1145/2858036.2858470
- Cracks in the Success Narrative: Rethinking Failure in Design Research through a Retrospective Trioethnography. ACM Trans. Comput.-Hum. Interact. 28, 6, Article 42 (nov 2021), 31 pages. https://doi.org/10.1145/3462447
- https://www.theguardian.com/profile/justinmccurry. [n. d.]. South Korean AI chatbot pulled from Facebook after hate speech towards minorities — theguardian.com. https://www.theguardian.com/world/2021/jan/14/time-to-properly-socialise-hate-speech-ai-chatbot-pulled-from-facebook. [Accessed 22-01-2024].
- Benefits Tech Advocacy Hub. 2022a. Arkansas Medicaid Home and Community Based Services Hours Cuts. Retrieved January 16, 2024 from https://www.btah.org/case-study/arkansas-medicaid-home-and-community-based-services-hours-cuts.html
- Benefits Tech Advocacy Hub. 2022b. Idaho Medicaid Home and Community Based Services Care Cuts. Retrieved January 16, 2024 from https://www.btah.org/case-study/idaho-medicaid-home-and-community-based-services-care-cuts.html
- Benefits Tech Advocacy Hub. 2022c. Missouri Medicaid Home and Community Based Services Eligibility Issues. Retrieved January 16, 2024 from https://www.btah.org/case-study/missouri-medicaid-home-and-community-based-services-eligibility-issues.html
- Benefits Tech Advocacy Hub. 2022d. Understanding the Lifecycle of Benefits Technology. Retrieved January 16, 2024 from https://www.btah.org/lifecycle.html
- The Coalition Against Predictive Policing in Pittsburgh. 2020a. Predictive Policing in Pittsburgh: A Primer. Retrieved January 16, 2024 from https://capp-pgh.com/files/Primer_v1.pdf
- The Coalition Against Predictive Policing in Pittsburgh. 2020b. Responding to the “Completion” of Predictive Policing in Pittsburgh. Retrieved January 16, 2024 from https://capp-pgh.com/files/Metro21%20Counter-Statement.pdf
- Abigail Z. Jacobs and Hanna Wallach. 2021. Measurement and Fairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). ACM. https://doi.org/10.1145/3442188.3445901
- The Surveillance AI Pipeline. arXiv:2309.15084 [cs.CV]
- Hamid Khan and Pete White. 2021. Police Surveillance Can’t Be Reformed. It Must Be Abolished. Retrieved January 16, 2024 from https://www.vice.com/en/article/xgzj7n/police-surveillance-cant-be-reformed-it-must-be-abolished
- Rebecca Klar. 2023. Exclusive: Meta faces pressure to support independent audit of risk oversight committee. Retrieved January 16, 2024 from https://news.yahoo.com/exclusive-meta-faces-pressure-support-164507051.html
- Amy Kraft. 2016. Microsoft shuts down AI chatbot after it turned into a Nazi. Retrieved January 16, 2024 from https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/
- Junhyup Kwon and Hyeong Yun. 2021. AI Chatbot Shut Down After Learning To Talk Like a Racist Asshole. Retrieved January 16, 2024 from https://www.vice.com/en/article/akd4g5/ai-chatbot-shut-down-after-learning-to-talk-like-a-racist-asshole
- Colin Lecher. 2018. What happens when an algorithm cuts your health care. https://www.theverge.com/2018/3/21/17144260/healthcare-medicaid-algorithm-arkansas-cerebral-palsy
- Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/
- Peter Lee. 2016. Learning from Tay’s introduction. https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/
- Calvin Liang. 2021. Reflexivity, positionality, and disclosure in HCI. Retrieved January 16, 2024 from https://medium.com/@caliang/reflexivity-positionality-and-disclosure-in-hci-3d95007e9916
- Q. Vera Liao and Jennifer Wortman Vaughan. 2023. AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap. arXiv:2306.01941 [cs.HC]
- Kevin De Liban. 2018. Comments re: Notice of Rule-Making for ARChoices Program. Retrieved January 21, 2024 from https://www.arkleg.state.ar.us/Home/FTPDocument?path=%2FAssembly%2FMeeting+Attachments%2F430%2F663%2FHandout+1+-Legal+Aid-Kevin+De+Liban.pdf Email providing public comment on proposed revisions to ARChoices program..
- Who Should Pay When Machines Cause Harm? Laypeople’s Expectations of Legal Damages for Machine-Caused Harm. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (Chicago, IL, USA) (FAccT ’23). Association for Computing Machinery, New York, NY, USA, 236–246. https://doi.org/10.1145/3593013.3593992
- Antony Loewenstein. 2023. The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World. Verso. https://www.versobooks.com/products/2684-the-palestine-laboratory
- Lynn Lomibao. 2021. Stop LAPD Spying Sues LAPD to Uncover Communications with UCLA Professor Who Founded PredPol, Inc. https://stoplapdspying.org/stop-lapd-spying-sues-lapd-for-communications-with-widely-condemned-ucla-professor-who-founded-predpol-inc/
- My Ly. 2023. Arkansas DHS agrees to pay $460,000 to settle case over in-home care cuts. Retrieved January 16, 2024 from https://www.arkansasonline.com/news/2023/aug/09/arkansas-dhs-agrees-to-pay-460000-to-settle-case/
- Angelica Mari. 2022. São Paulo subway ordered to suspend use of facial recognition. https://www.zdnet.com/article/sao-paulo-subway-ordered-to-suspend-use-of-facial-recognition/
- Jesse Marx and Lilly Irani. 2021. Redacted. Taller California. https://www.printedmatter.org/catalog/58378/
- Carole McGranahan. 2016. Theorizing Refusal: An Introduction. Cultural Anthropology 31 (08 2016), 319–325. https://doi.org/10.14506/ca31.3.01
- Sean McGregor. 2020. Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. arXiv:2011.08512 [cs.CY]
- James Meadway. 2020. “Fuck the Algorithm”: How A-Level Students Have Shown the Future of Protest. https://novaramedia.com/2020/08/17/fuck-the-algorithm-how-a-level-students-have-shown-future-of-protest/
- Dhruv Mehrotra and Dell Cameron. 2023. The Maker of ShotSpotter Is Buying the World’s Most Infamous Predictive Policing Tech. Retrieved January 16, 2024 from https://www.wired.com/story/soundthinking-geolitica-acquisition-predictive-policing/
- Brian Merchant. 2023. Blood in the Machine: The Origins of the Rebellion Against Big Tech. Little, Brown and Company.
- Jacob Metcalf. 2023. What federal agencies can learn from NYC’s AI Hiring Law. Retrieved January 16, 2024 from https://thehill.com/opinion/technology/4360523-what-federal-agencies-can-learn-from-new-york-citys-ai-hiring-law/
- Taking Algorithms to Courts: A Relational Approach to Algorithmic Accountability. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (Chicago, IL, USA) (FAccT ’23). Association for Computing Machinery, New York, NY, USA, 1450–1462. https://doi.org/10.1145/3593013.3594092
- Microsoft. 2022. Microsoft Responsible AI Standard, v2 (General Requirements). Retrieved July 21, 2023 from https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf
- Yeshimabeit Milner. 2020. Abolish Big Data. https://medium.com/@YESHICAN/abolish-big-data-ad0871579a41
- Aaron Mok. 2023. AI is expensive. A search on Google’s chatbot Bard costs the company 10 times more than a regular one, which could amount to several billion dollars. Retrieved January 16, 2024 from https://www.businessinsider.com/ai-expensive-google-chatbot-bard-may-cost-company-billions-dollars-2023-2
- Assembling accountability: algorithmic impact assessment for the public interest. Retrieved July 21, 2023 from https://datasociety.net/wp-content/uploads/2021/06/Assembling-Accountability.pdf
- Mozilla. 2023. Auditing AI: Announcing the 2023 Mozilla Technology Fund Cohort. Retrieved January 16, 2024 from https://foundation.mozilla.org/en/blog/auditing-ai-announcing-the-2023-mozilla-technology-fund-cohort/
- Safiya Umoja Noble. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press. http://www.jstor.org/stable/j.ctt1pwt9w5
- Oryem Nyeko. 2023. Uganda: Rights Concerns Over License Plate Tracking — Human Rights Watch. https://www.hrw.org/news/2023/11/14/uganda-rights-concerns-over-license-plate-tracking
- Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 6464 (2019), 447–453.
- The Government of Canada. 2023. Algorithmic Impact Assessment tool. Retrieved July 21, 2023 from https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html
- The White House Office of Science and Technology Policy. 2022. Blueprint for an AI Bill of Rights: A Vision for Protecting Our Civil Rights in the Algorithmic Age. Retrieved October 2, 2023 from https://www.whitehouse.gov/ostp/news-updates/2022/10/04/blueprint-for-an-ai-bill-of-rightsa-vision-for-protecting-our-civil-rights-in-the-algorithmic-age/
- UN OHCHR. 2022. OHCHR Assessment of Human Rights Concerns in the Xinjiang Uyghur Autonomous Region, People’s Republic of China.
- Mitigating Bias in Algorithmic Systems—A Fish-eye View. ACM Comput. Surv. 55, 5, Article 87 (dec 2022), 37 pages. https://doi.org/10.1145/3527152
- Phil Pennington. 2023. Facial recognition: Officials yet to meet obligation to seek views of Māori - documents. https://www.rnz.co.nz/news/national/501761/facial-recognition-officials-yet-to-meet-obligation-to-seek-views-of-maori-documents
- Nick Perry. 2023. New Zealand debates whether ethnicity should be a factor for surgery waitlists. https://apnews.com/article/new-zealand-surgery-ethnicity-algorithm-maori-1b44026f2661772a7eb3bd4444619446
- Dana Pessach and Erez Shmueli. 2022. A Review on Fairness in Machine Learning. ACM Comput. Surv. 55, 3, Article 51 (feb 2022), 44 pages. https://doi.org/10.1145/3494672
- Brendan Pierson and Brendan Pierson. 2023. Lawsuit claims UnitedHealth AI wrongfully denies elderly extended care. Reuters (Nov 2023). https://www.reuters.com/legal/lawsuit-claims-unitedhealth-ai-wrongfully-denies-elderly-extended-care-2023-11-14/
- Jon Porter. 2020. UK ditches exam results generated by biased algorithm after student protests. Retrieved January 16, 2024 from https://www.theverge.com/2020/8/17/21372045/uk-a-level-results-algorithm-biased-coronavirus-covid-19-pandemic-university-applications
- Gabriel Puron-Cid and J. Ramon Gil-Garcia. 2022. Are Smart Cities Too Expensive in the Long Term? Analyzing the Effects of ICT Infrastructure on Municipal Financial Sustainability. Sustainability 14, 10 (2022). https://doi.org/10.3390/su14106055
- Inioluwa Deborah Raji and Joy Buolamwini. 2022. Actionable Auditing Revisited: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products. Commun. ACM 66, 1 (dec 2022), 101–108. https://doi.org/10.1145/3571151
- The Fallacy of AI Functionality. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (Seoul, Republic of Korea) (FAccT ’22). Association for Computing Machinery, New York, NY, USA, 959–972. https://doi.org/10.1145/3531146.3533158
- Towards Accountability Infrastructure: Gaps and opportunities in AI audit tooling. (2024). Preprint (Under Review).
- Algorithmic Impact Assessments: A Practical Framework for Public Agency. Retrieved July 21, 2023 from https://www.nist.gov/system/files/documents/2021/10/04/aiareport2018.pdf
- ”Why Should I Trust You?”: Explaining the Predictions of Any Classifier. arXiv:1602.04938 [cs.LG]
- Values in Emotion Artificial Intelligence Hiring Services: Technosolutions to Organizational Problems. Proc. ACM Hum.-Comput. Interact. 7, CSCW1, Article 109 (apr 2023), 28 pages. https://doi.org/10.1145/3579543
- Casey Ross. 2021. Epic’s AI algorithms, shielded from scrutiny by a corporate firewall, are delivering inaccurate information on seriously ill patients. Retrieved January 21, 2024 from https://www.statnews.com/2021/07/26/epic-hospital-algorithms-sepsis-investigation/
- Tate Ryan-Mosley. 2023a. An algorithm intended to reduce poverty might disqualify people in need. Retrieved January 16, 2024 from https://www.technologyreview.com/2023/06/13/1074551/an-algorithm-intended-to-reduce-poverty-in-jordan-disqualifies-people-in-need/
- Tate Ryan-Mosley. 2023b. How face recognition rules in the US got stuck in political gridlock. https://www.technologyreview.com/2023/07/24/1076668/how-face-recognition-rules-in-the-us-got-stuck-in-political-gridlock/
- Nete Schwennesen. 2019. Algorithmic assemblages of care: imaginaries, epistemologies and repair work. Sociology of Health & Illness 41 (10 2019), 176–192. https://doi.org/10.1111/1467-9566.12900
- Joey Scott. 2023. LAPD Is Using Israeli Surveillance Software That Can Track Your Phone and Social Media. Retrieved January 16, 2024 from https://knock-la.com/lapd-is-using-israeli-surveillance-software-that-can-track-your-phone-and-social-media/#:~:text=During%20a%202014%20trip%20to,would%20be%20using%20all%20three
- Nick Seaver. 2019. Knowing Algorithms. Princeton University Press, Princeton, 412–422. https://doi.org/10.1515/9780691190600-028
- Andrew D. Selbst and Solon Barocas. 2018. The Intuitive Appeal of Explainable Machines. Fordham Law Review 87 (2018), 1085. https://api.semanticscholar.org/CorpusID:59548063
- Deconstructing Design Decisions: Why Courts Must Interrogate Machine Learning and Other Technologies. Ohio State Law Journal 85 (2024). https://ssrn.com/abstract=4564304
- Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (Montréal, QC, Canada) (AIES ’23). Association for Computing Machinery, New York, NY, USA, 723–741. https://doi.org/10.1145/3600211.3604673
- Hong Shen, Alicia DeVos, Motahhare Eslami, and Kenneth Holstein. 2021. Everyday Algorithm Auditing: Understanding the Power of Everyday Users in Surfacing Harmful Algorithmic Behaviors. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 433 (October 2021), 29 pages. https://doi.org/10.1145/3479577
- Audra Simpson. 2007. On Ethnographic Refusal: Indigeneity, ‘Voice’ and Colonial Citizenship. Junctures (2007).
- Michael Sisitzky and Ben Schaefer. 2021. The NYPD Published its Arsenal of Surveillance Tech. Here’s What We Learned. Retrieved January 16, 2024 from https://www.nyclu.org/en/news/nypd-published-its-arsenal-surveillance-tech-heres-what-we-learned
- A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (London, United Kingdom) (KDD ’18). Association for Computing Machinery, New York, NY, USA, 2239–2248. https://doi.org/10.1145/3219819.3220046
- Imagining New Futures beyond Predictive Systems in Child Welfare: A Qualitative Study with Impacted Stakeholders. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (Seoul, Republic of Korea) (FAccT ’22). Association for Computing Machinery, New York, NY, USA, 1162–1177. https://doi.org/10.1145/3531146.3533177
- What Makes a Good Explanation?: A Harmonized View of Properties of Explanations. In Progress and Challenges in Building Trustworthy Embodied AI. https://openreview.net/forum?id=YDyLZWwpBK2
- Harini Suresh and John Guttag. 2021. A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. In Proceedings of the 1st ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’21). Association for Computing Machinery, New York, NY, USA, Article 17, 9 pages. https://doi.org/10.1145/3465416.3483305
- The Automated Regional Justice Information System. 2021. TACIDS: Tactical Identification System Using Facial Recognition. Retrieved January 16, 2024 from https://voiceofsandiego.org/wp-content/uploads/2021/04/TACIDS-Final-Report-FINAL.pdf
- Kim TallBear. 2013. Native American DNA: Tribal Belonging and the False Promise of Genetic Science. U of Minnesota Press.
- A. Toh and Human Rights Watch. 2023. Automated Neglect: How the World Bank’s Push to Allocate Cash Assistance Using Algorithms Threatens Rights. Human Rights Watch. https://books.google.com/books?id=rNMG0AEACAAJ
- Allegheny Family Screening Tool: Methodology, Version 2. Retrieved January 21, 2024 from https://www.alleghenycountyanalytics.us/wp-content/uploads/2019/05/Methodology-V2-from-16-ACDHS-26_PredictiveRisk_Package_050119_FINAL-7.pdf
- Matías Valderrama. 2021. Sistema Alerta Niñez y la predicción del riesgo de vulneración de derechos de la infancia. Derechos Digitales (2021).
- C. van Veen. 2020. Landmark judgment from The Netherlands on digital welfare states and human rights. Open Global Rights (2020).
- Joana Varon and Paz Peña. 2022. Not My A.I.: Towards Critical Feminist Frameworks to Resist Oppressive A.I. Systems. Carr Center Discussion Paper Series (2022). https://carrcenter.hks.harvard.edu/publications/notmyai
- James Vincent. 2016. Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day. Retrieved January 22, 2024 from https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
- Rent Going Up? One Company’s Algorithm Could Be Why. Retrieved January 16, 2024 from https://www.propublica.org/article/yieldstar-rent-increase-realpage-rent
- Angelina Wang, Sayash Kapoor, Solon Barocas, and Arvind Narayanan. 2023. Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (Chicago, IL, USA) (FAccT ’23). Association for Computing Machinery, New York, NY, USA, 626. https://doi.org/10.1145/3593013.3594030
- Fulton Wang and Cynthia Rudin. 2015. Falling Rule Lists. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics (Proceedings of Machine Learning Research, Vol. 38), Guy Lebanon and S. V. N. Vishwanathan (Eds.). PMLR, San Diego, California, USA, 1013–1022. https://proceedings.mlr.press/v38/wang15a.html
- American Dragnet: Data-Driven Deportation in the 21st Century. (2022). Retrieved January 16, 2024 from https://americandragnet.org/
- Elizabeth Warren. 2023. Warren, Lawmakers Urge Justice Department to Review YieldStar, Warn of De-Facto Price Setting and Collusion After Senate Investigation. Retrieved January 21, 2024 from https://www.warren.senate.gov/oversight/letters/warren-lawmakers-urge-justice-department-to-review-yieldstar-warn-of-de-facto-price-setting-and-collusion-after-senate-investigation Press Release.
- Human Rights Watch. 2023. Automated Neglect: How The World Bank’s Push to Allocate Cash Assistance Using Algorithms Threatens Rights. Retrieved January 16, 2024 from https://www.hrw.org/report/2023/06/13/automated-neglect/how-world-banks-push-allocate-cash-assistance-using-algorithms
- Fairlearn: Assessing and Improving Fairness of AI Systems. arXiv:2303.16626 [cs.LG]
- Emma Weil and Elizabeth Edwards. 2023. Using Technical Skills to Fight Actual Public Benefits Cuts and Austerity Policies, with the Benefits Tech Advocacy Hub. (2023). https://www.youtube.com/watch?v=ZELeRPx74PE ACM FAccT Tutorial.
- Kate Wells. 2023. An eating disorders chatbot offered dieting advice, raising fears about AI in health. Retrieved January 16, 2024 from https://www.npr.org/sections/health-shots/2023/06/08/1180838096/an-eating-disorders-chatbot-offered-dieting-advice-raising-fears-about-ai-in-hea
- HCI Tactics for Politics from Below: Meeting the Challenges of Smart Cities. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 297, 15 pages. https://doi.org/10.1145/3411764.3445314
- Zack Whittaker. 2019. Amazon shareholders reject facial recognition sale ban to governments. Retrieved January 16, 2024 from https://techcrunch.com/2019/05/22/amazon-reject-facial-recognition-proposals/
- Christopher Wilkinson and Dorthy Lukens. 2023. The Growing Regulation of AI-Based Employment Decision Tools. Retrieved January 16, 2024 from https://www.perkinscoie.com/en/news-insights/the-growing-regulation-of-ai-based-employment-decision-tools.html
- Timothy Williams. 2015. Facial Recognition Software Moves From Overseas Wars to Local Police. Retrieved January 16, 2024 from https://www.nytimes.com/2015/08/13/us/facial-recognition-software-moves-from-overseas-wars-to-local-police.html?_r=1
- Tom Wilson and Madhumita Murgia. 2019. Uganda confirms use of Huawei facial recognition cameras. Retrieved January 16, 2024 from https://www.ft.com/content/e20580de-c35f-11e9-a8e9-296ca66511c9
- External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients. JAMA Internal Medicine 181, 8 (August 2021), 1065–1070. https://doi.org/10.1001/jamainternmed.2021.2626
- Sarah Wu. 2019. Somerville City Council passes facial recognition ban. Retrieved January 16, 2024 from https://www.bostonglobe.com/metro/2019/06/27/somerville-city-council-passes-facial-recognition-ban/
- Chloe Xiang. 2023. Eating Disorder Helpline Disables Chatbot for ’Harmful’ Responses After Firing Human Staff. Retrieved January 16, 2024 from https://www.vice.com/en/article/qjvk97/eating-disorder-helpline-disables-chatbot-for-harmful-responses-after-firing-human-staff
- Kyra Yee, Uthaipon Tantipongpipat, and Shubhanshu Mishra. 2021. Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design, and Agency. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 450 (October 2021), 24 pages. https://doi.org/10.1145/3479594
- Creating Design Resources to Scaffold the Ideation of AI Concepts. In Proceedings of the 2023 ACM Designing Interactive Systems Conference (Pittsburgh, PA, USA) (DIS ’23). Association for Computing Machinery, New York, NY, USA, 2326–2346. https://doi.org/10.1145/3563657.3596058
- Jonathan Zong and J. Nathan Matias. 2023. Data Refusal From Below: A Framework for Understanding, Evaluating, and Envisioning Refusal as Design. ACM J. Responsib. Comput. (October 2023). https://doi.org/10.1145/3630107 Just Accepted.