Auditing Work: Exploring the New York City algorithmic bias audit regime (2402.08101v1)
Abstract: In July 2023, New York City (NYC) initiated the first algorithm auditing regime for commercial machine-learning systems. Local Law 144 (LL 144) mandates that NYC-based employers using automated employment decision tools (AEDTs) in hiring undergo annual bias audits conducted by an independent auditor. This paper examines lessons from LL 144 for other algorithm auditing efforts. Through qualitative interviews with 16 experts and practitioners within the regime, we find that LL 144 has not effectively established an auditing regime. The law fails to clearly define key aspects, such as AEDTs and independent auditors; as a result, auditors, AEDT vendors, and companies using AEDTs have defined the law's practical implementation in ways that fail to protect job applicants. Contributing factors include the law's flawed transparency-driven theory of change, industry lobbying that narrowed the definition of AEDTs, practical and cultural challenges auditors face in accessing data, and wide disagreement over what constitutes a legitimate auditor, resulting in four distinct 'auditor roles.' We conclude with four recommendations for policymakers seeking to create similar bias auditing regimes, emphasizing clearer definitions, metrics, and increased accountability. By exploring LL 144 through the lens of auditors, our paper advances the evidence base on auditing as an accountability mechanism and provides guidance for policymakers designing similar regimes.
Authors: Lara Groves, Jacob Metcalf, Alayna Kennedy, Briana Vecchione, Andrew Strait